
Intel Doctor Practices Technology to Make Healthcare Safer, More Efficient and Accessible

 

Sharing data between clinicians and patients is key to "breaking down silos," according to Mark Blatt, who believes the mobile and tablet revolution has had a dramatic, positive effect on healthcare. (Flickr photo)

As Intel's resident M.D., Mark Blatt doesn't wear a stethoscope around his neck and he doesn't treat employees. Instead of medical devices, his tools are tablets, smartphones, wireless networks and relational databases. The doctor-turned-technology evangelist is seeking ways to double the number of patients that a doctor sees while cutting the cost of treating them by half or more. To reach that goal, Intel's worldwide medical director draws on the same principles that Intel has applied to semiconductor manufacturing -- pursuing what he calls a "Moore's Law for healthcare" to slash costs.

 

Before he came to Intel in 2000, Blatt saw patients for 15 years as a general practitioner. Now he's working with patient data -- digitizing it, sharing it and making it more accessible. Recently, Blatt discussed how he sees technology being used in healthcare settings and what it will mean for patients and healthcare providers now and into the future.

 

You've talked about a "Moore's Law for Healthcare." What is it exactly?

 

Back in the mid-1990s [when Blatt was a practicing physician], one Saturday morning when I was on call, I cared for about 80 percent of patients over the phone, and I realized that I'd "seen" more people in those 4 hours than I usually see in a [normal] week. Each visit had taken approximately 30 to 60 seconds. The price point was cheap; in fact, it was free. For the patients it was extremely convenient and timely medicine.

 

What I wondered was how to replicate that Saturday morning experience to deliver inexpensive healthcare. How can we deliver inexpensive healthcare conveniently for the masses? When I say inexpensive, I mean pennies on the dollar. How can we do what Intel does in manufacturing, but for healthcare: double the number of patients [a doctor treats] and cut the cost of treating them by 50 percent at a minimum? That's what I like to call a "Moore's Law for healthcare."

 

If I have $100 for the care of a single person, I now want to cut the cost in half and double the volume so I can see two people for $50 each. That points in the direction of an order-of-magnitude reduction in care costs, as opposed to a 5-10 percent savings, which is not going to cut it.
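To make the compounding concrete, here is a small Python sketch of the arithmetic behind that goal; the starting cost and the number of halvings are illustrative assumptions, not figures from Blatt.

```python
cost = 100.0  # illustrative: dollars to care for one patient today
for generation in range(1, 5):
    cost /= 2  # each "generation" halves the per-patient cost
    print(f"after {generation} halvings: ${cost:.2f} per patient")

# After ~3.3 halvings the per-patient cost has fallen 10x -- the
# order-of-magnitude reduction Blatt contrasts with 5-10 percent savings.
```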

 

How can technology support that level of cost reduction?

 

I like to talk about four major functions that technology needs to bring to healthcare. The first is that you have to digitize data -- you have to gather it. Everything that exists right now in paper-based format, from lab tests to notes to patient advice to appointment schedules, has to be digitized. We're starting to see that happening. I would say the number of clinicians using electronic records in the United States has tripled in the last 3 to 5 years.

 

Next, we need to look at how we share data. How do we break down silos and take that data you've accumulated about people at your practice, at your hospital, in your group, and share it more broadly to where care is actually needed?

 

Then we need to share [data] on mobile devices. That had been slow in coming until the evolution of smartphones and consumer tablets led to the consumerization of IT. I'd say that in the last 2 years every hospital in the United States has tried a consumer media tablet, specifically the iPad, and I would venture a guess that three-quarters or more of doctors have smartphones.

 

The final step after gather, share and mobilize is to empower patients and that is an emerging trend with social media.

 

How has technology changed the patient experience?

 

I think it's made the patient safer. That's the fundamental thing patients don't quite grasp. Healthcare as we've been practicing it over the last 20 years seems like a really dangerous art. People are really committed to safety, but the processes, the steps we have to go through, have become so complicated that to get it wrong, to make a mistake, is almost commonplace. The landmark article on this came out [in 2000] from the Institute of Medicine, called "To Err Is Human." Everyone responded to it with the understanding that somewhere between 44,000 and 98,000 people a year died in U.S. hospitals due to process errors. Some follow-on studies concluded those figures were underestimates. It is not out of maliciousness, it's not out of carelessness -- it's that the processes are extremely complicated.

 

What is the greatest challenge for the healthcare industry when it comes to new technology?

 

Uniform adoption and dissemination of technology. [Former Intel CEO] Andy Grove once wrote an article for JAMA, the Journal of the American Medical Association, one of the most prestigious journals in the world, about the diffusion time for innovation. He pointed out that the diffusion time for an idea people think is right is 18 years, so that by the time the idea disseminates it's either old news or we're onto the next one. And that's probably the biggest fundamental issue for healthcare. Practices that have value and benefit don't disseminate easily and quickly across this cottage industry -- there are too many individuals, nodes, sites and practitioners, and we don't get a more uniform approach.

 

As Intel's worldwide medical director, Mark Blatt shares his experience as a general practitioner with business development managers in hopes of bringing more efficiency, cost-effectiveness and safety to healthcare through technology. (Flickr photo)

What's an example of technology changing a familiar healthcare process?

 

IT has introduced systems that can do things over and over again without making mistakes. Take end-to-end e-prescribing, where the doctor writes an order electronically -- the order is checked electronically for drug and dose, against the other drugs in the database, against allergies ... the pharmacist uses an electronic robot picker to assemble the order, so there is very little chance of getting the wrong medicine. They are using proven technology from other industries, like barcoding or RFID tagging, to make sure the medicine is administered correctly. It can virtually eliminate prescription errors, which by various estimates kill about 17,000 people out of those 50,000 to 100,000 a year at hospitals.
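A minimal Python sketch of the kind of automated safety checks Blatt describes follows; the drug data, field names and thresholds are invented for illustration, not any vendor's e-prescribing API.

```python
def check_order(order, patient, formulary):
    """Return a list of safety problems; an empty list means the order passes."""
    errors = []
    drug = formulary.get(order["drug"])
    if drug is None:
        return ["unknown drug"]
    if not (drug["min_dose"] <= order["dose"] <= drug["max_dose"]):
        errors.append(f"dose {order['dose']} outside safe range")
    if order["drug"] in patient["allergies"]:
        errors.append("patient is allergic to this drug")
    for other in patient["active_drugs"]:  # check drug-drug interactions
        if other in drug["interactions"]:
            errors.append(f"interacts with {other}")
    return errors

# Hypothetical records, for illustration only.
formulary = {"warfarin": {"min_dose": 1, "max_dose": 10, "interactions": {"aspirin"}}}
patient = {"allergies": {"penicillin"}, "active_drugs": ["aspirin"]}
print(check_order({"drug": "warfarin", "dose": 5}, patient, formulary))
# ['interacts with aspirin'] -- flagged before the order reaches the pharmacy
```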

 

What's the next big change you see in healthcare?

 

In the short term, it's mobile. Going mobile is no longer an option; it's an imperative in healthcare. The advent of mobility in our personal lives has convinced us that there is tremendous value not only for clinicians of all types -- doctors, nurses, physical therapists, ambulance drivers, pharmacists -- but for citizens as well. Programs that monitor your heart rate, your footsteps, your body mass index, and make recommendations that are relevant to you -- about diet, say, when you walk to a restaurant, based on your morning's blood sugar level.

 

We'll see machine-to-machine connected devices where monitoring systems automatically dump [information] into database systems. Take home diabetes tests: I prick my finger and the results are entered. These things will happen seamlessly. And maybe the tests become increasingly noninvasive: I stand in front of a mirror and it measures the subcapillary pulse in my face and takes my heartbeat. I'll make a guess that between 5 and 10 years from now, about 10 percent of all data that goes into the electronic record will be typed and 90 percent will go in by some other mechanism like gesture, speech, touch or automation.

 

What is going to get people to change versus sticking with the status quo?

 

Healthcare is a very personal service. We are used to consuming it in a certain manner. So how do I get consumers to crave the digital experience over the face-to-face experience? Evidence shows that in many ways the digital experience is safer, more convenient and less costly. Citizens are going to realize that accessing care online digitally is not just more convenient and less expensive, but is actually safer. As the tools become more robust, that's going to become a more dynamic experience, and [physicians] are going to be able to get into complexities they couldn't before. So I think getting citizens to change their expectations about how they consume healthcare is something we haven't even begun to do yet.

 

 


 

'Train the Trainer' Approach Helps Teachers Engage Technology for Learning

 

As a former middle school science teacher, Intel Teach global K-12 education manager Shelley Shott (second from left) has hands-on classroom experience. (Flickr photo)

Technology is now common in many classrooms, but teachers don't always know how to use high-tech tools to improve student learning. To help close that gap, Intel began offering a Teach to the Future course in 1999 that supports teachers in integrating technology into their lessons to prepare students to collaborate, communicate, think critically and solve problems. Twenty students were in the first course 12 years ago. Since then, the Intel Teach program has evolved to offer a portfolio of courses for both K-12 and higher education and has trained more than 10 million teachers in 70 countries.

 

As the global K-12 education manager, Shelley Shott brings real classroom experience to the Intel Teach strategy that she and her team develop and implement. Before joining Intel in 2005, she taught middle school for 12 years. That hands-on background informs her perspective on the challenges and opportunities of being a classroom teacher in an era where tech is high and budgets are often low. Recently, Shott discussed the importance of shifting from a teacher-centered to a student-centered classroom and how technology can help enhance students' 21st century skills.

 

What is Intel Teach?

 

Intel Teach is a portfolio of teacher professional development programs that help teachers integrate technology into the classroom and encourage them to move from a teacher-centered classroom to a student-centered one.

 

Do the programs focus on technology?

 

The focus is integration of technology using the school's curriculum. [The programs] don't write content or teach teachers or students how to use a mouse. The focus is on the pedagogy, integrating technology and enhancing 21st century skills.

 

What needs do these programs fill?

 

School districts and states don't always create their own teacher professional development. It is very expensive and time-consuming. Depending on where you are, it is sometimes cheaper to bring in an outside entity. Our stuff is free. It doesn't cost anything for the curriculum itself. That is a huge savings right there.

 

We use a "train the trainer" model -- they use their people once we've trained them. We're offering them something that would cost hundreds of thousands of dollars to develop.

 

We always have third-party research on our programs. This allows us to improve them as well as go to a government and show them that it is a research-based, evaluated program.

 

How do you see the pros and cons of technology in the classroom?

 

[Technology] is a blessing because it allows so many things. If I was a kid in the classroom today I could collaborate with other kids across the world, I could ask experts things. Before, I may have had a textbook that was 7 years old and out of date.

 

When I was teaching about the solar system and water on the moon, 2 days before, I saw in the paper that they had discovered water on the moon. Of course, the textbook said there was no water on the moon. Let's get them to have all the same resources that every other kid in this city, state and country has. Let's make it fair. Let's let these kids succeed also.

 

[Technology] is also a curse if it is used in the old manner of teaching such as memorizing and spitting back or copying and pasting.

 

 


 

Independence Day Fireworks Spectaculars Rely on Computerized Choreography

 

Fifth-generation pyrotechnician Jim Souza of Pyro Spectaculars uses a computer to choreograph "pyro-musicals" that fuse modern technology with the ancient craft of fireworks. Courtesy of Pyro Spectaculars by Souza. (Flickr photo)

Lighting fireworks manually is a relic of the past for today's big-budget pyrotechnic shows. The exploding shells themselves are still made by hand, but modern technology has found its way into other key functions of the centuries-old craft.

 

The displays of today require computing horsepower to choreograph the precise timing, height and direction of state-of-the-art aerial explosions that deliver whiz-bang effects, even spell out words, and are often set to music.

 

"What's changed over the years is the ability to design these shows with a computer," said Jim Souza, CEO and president of Pyro Spectaculars, which has wowed crowds at the Olympics, the Boston Pops Fourth of July Celebration and Chinese New Year celebrations overseas. "These displays require perfect timing, precision, and for shows like the Golden Gate Bridge's 75th birthday celebration, tight synchronization with music broadcasting over a radio station."

 

Technological advancements make it possible to create fireworks displays of increasing complexity. The more sophisticated capabilities challenge pyrotechnicians to continually dazzle audiences with so-called "pyro-musicals" such as Macy's Fourth of July above the Hudson River in New York City.

 

A silver curtain fireworks display was used at the 75th anniversary celebration of the Golden Gate Bridge in San Francisco. Courtesy of Pyro Spectaculars by Souza. (Flickr photo)

"The big shows now require computerization for the choreography," said Julie Heckman, executive director of the American Pyrotechnics Association. "Shows are more elaborate and more visually appealing than before computers were used."

 

Computers were first used around 1978 to remotely ignite fireworks, according to Heckman. Back then, a computer set off an electric match to light a fuse, but in the 3 decades since, computers have been asked to do a lot more.

 

"These days it's like putting on a movie production when companies design a show," Heckman said. "These guys sit down at a computer, select the music and have an electronic inventory of effects that knows what shell will burst at what height, how long the firework will last -- it's quite amazing."

 

Souza scripts, designs and visualizes pyro-musicals using software from Infinity Visions on a MacBook Pro running an Intel Core i7 processor. Before a show can be choreographed, the properties for each shell must be logged, including burn time, lift time and effects. Microchips embedded in the shells trigger the fireworks to explode at a specific height, in a particular direction and with millisecond precision.
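The scheduling arithmetic implied here is simple enough to sketch in a few lines of Python: to land a burst on a musical beat, the controller fires the shell early by its lift time. The shell data and field names below are hypothetical, not the Infinity Visions format.

```python
# Hypothetical shell inventory; all times are in seconds.
shells = [
    {"name": "red peony",      "lift_time": 3.2, "burn_time": 1.8},
    {"name": "silver curtain", "lift_time": 4.1, "burn_time": 2.5},
]
beat_times = [32.0, 36.0]  # points in the soundtrack where bursts should land

for shell, beat in zip(shells, beat_times):
    ignition = beat - shell["lift_time"]  # fire early so the burst hits the beat
    fade = beat + shell["burn_time"]      # when the effect will have died out
    print(f"{shell['name']}: ignite at {ignition:.2f}s, "
          f"burst at {beat:.2f}s, fades by {fade:.2f}s")
```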

 

"Timing chips are something I know the companies are working on," Heckman said. "These chips will accelerate wider use of letters, for example. An 'M' for 'Macy's' could look like a 'W' if turned the wrong way. We're getting there, but [we're] not there yet."

 

The Macy's "Golden Mile" fireworks effect, a shower of sparks stretching a mile across the Hudson River, will grace New York's largest Fourth of July celebration. Courtesy of Pyro Spectaculars by Souza. (Flickr photo)

Though letters and numbers have appeared in larger shows over the past few years, often for countdowns and spelling out the patriotic abbreviation in Lee Greenwood's anthem, "God Bless the U.S.A.," Souza said the industry hasn't yet perfected the pyrotechnic equivalent of skywriting.

 

"We call it ballistic fireworks technology, and it takes a number of shells to break in sequences to get it correct," Souza said. "Maybe that's something for next year's [Macy's] show."

 

The Macy's show is one of 403 shows Souza's company will stage this Fourth of July, and the department store-sponsored spectacular promises to showcase the latest innovations.

 

"Thanks to fusing the ancient art of fireworks and modern technology, you'll see cascades falling out of the sky that are even more vibrant and beautiful to watch," Souza said. "We'll have something new that I call eclipse shells that go up in the sky and you see half appearing in red, the other in green, then light up in another color like orange, then light up to complete a full circle -- it's kinda ghost-like, but it also looks like an eclipse."

 

For all the modern technology that goes into today's fireworks shows, Souza believes there's one instrument that sparks everything.

 

"It all starts with the basic -- an imagination -- which isn't a technology at all," Souza said. "It's the freedom of the brain, and with regard to our Fourth of July shows, what better way to celebrate this freedom than to honor America's birthday through fireworks."

 

 


 

Touchscreens Have Been Around for Decades, but Advancing Technology and Declining Costs Have Brought Them into the Mainstream

 

Until fairly recently, touch computing was an example of the sort of plausible technology you'd expect to find in science fiction portrayals of the not-too-distant future. Today, largely due to the proliferation of smartphones and tablets such as the Apple iPad, sophisticated touchscreens are a common fixture in our digital lives.

 

The technology and software that power touch-enabled devices have become more advanced and more ubiquitous, from laptop screens and desktop all-in-ones to interactive retail displays and tabletop computers. Understanding how touch technology works requires a look back at how touchscreens evolved from primitive interfaces to chic and powerful digital accessories.

 

Simple Beginnings

 

It might be surprising to learn that touchscreens have been around for decades, although early versions aren't comparable to today's technology. CRT (cathode ray tube) displays that react to the touch of a human finger or a stylus were developed in the late 1960s, for instance. These early touchscreens had one significant limitation: they were only responsive to a single point of contact. This was sufficient for, say, taking money out of an ATM, where you only need to press one button at a time.

 

More complex interactions, such as typing, require multi-touch technology, which was only developed within the past 20 years. But to understand the technical details of how multi-touch screens work, let's first examine their single-touch predecessors.

 

Resistive touchscreen technology employs narrowly separated layers of conductive material that react to the location of the contact. (Flickr photo)

Types of Touchscreen Technology

 

A touchscreen relies on sensors to pick up the location of a pointing device, whether it's your finger or a stylus. Early touchscreen research developed along two paths: physical devices that used surface pressure to detect location, and devices that used electrical current to translate touch into function.

 

Physically, touchscreens aren't meant to be used like a mechanical button -- you touch the button, you don't press it. In order to register contact, then, some touchscreens use resistive surfaces.

 

Resistive technology employs narrowly separated layers of conductive material that react to the location of the contact. In a sense, then, you really are "pushing" a button, insofar as you're completing a circuit. Resistive sensors aren't very precise, but they're cheap and easy to implement, which suits kiosks and terminals that only require a user to tap large buttons.
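A minimal sketch of how a 4-wire resistive controller turns that completed circuit into a coordinate: drive one layer as a voltage divider, sample the other layer's voltage at the contact point, then swap roles for the second axis. The drive() and adc_read() helpers are hypothetical stand-ins for microcontroller GPIO and ADC calls, stubbed out here so the sketch runs.

```python
ADC_MAX = 4095  # assuming a 12-bit analog-to-digital converter

def drive(pin, high):   # stub: real firmware would set a GPIO level
    pass

def adc_read(pin):      # stub: real firmware would sample the ADC
    return 2048         # pretend the finger is mid-screen

def read_axis(drive_pair, sense_pin):
    drive(drive_pair[0], high=True)    # voltage gradient across one layer
    drive(drive_pair[1], high=False)
    raw = adc_read(sense_pin)          # other layer reads the voltage
    return raw / ADC_MAX               # at the contact point: 0.0 .. 1.0

x = read_axis(("XP", "XM"), "YP")      # fractional horizontal position
y = read_axis(("YP", "YM"), "XP")      # fractional vertical position
print(x, y)
```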

 

Capacitive touchscreens use an electrostatic field to continuously sample the screen surface for movement and relay that information to a processor that can interpret it. (Flickr photo)

Capacitive touchscreens, on the other hand, take the principle behind resistive touchscreens and add precision. Here, the surface itself is electrically charged in the form of a clear conductive coating. When contact is made, the charge is interrupted and a precise location is registered. Because the only "moving parts" are electrons, capacitive touchscreens last longer and are more durable than their resistive counterparts. The drawback is that a stylus or other charge-neutral writing instrument isn't going to register, which means capacitive touchscreens rely upon your body's natural electrical charge in order to function.

 

The actual capacitive touchscreen surface begins with a coating of anti-glare material applied to a transparent protective cover. The cover is treated to guard against scratching and is bonded to the capacitive layer(s). The layer that does the actual touch sampling can use either self capacitance, which features an array of integrated circuits and electrodes that sense points of contact, or mutual capacitance, which uses two distinct layers to carry and sense current. Some touchscreen systems, such as the iPhone, use both capacitive methods to detect contact. Only below this layer are the actual LCD display and any backlighting systems.
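The mutual-capacitance variant can be sketched as a row-by-row scan: drive each row electrode, sense every column, and flag cells whose coupling drops noticeably below a no-touch baseline, since a finger steals charge from the row-column junction. The grid, readings and threshold below are simulated for illustration; a real controller reads hardware sense lines.

```python
ROWS, COLS = 4, 4
baseline = [[100] * COLS for _ in range(ROWS)]  # counts with no touch present

def sense(row, col):
    # Simulated reading: a finger near cell (1, 2) lowers the coupling there.
    return 70 if (row, col) == (1, 2) else 100

touched = []
for r in range(ROWS):            # drive row r...
    for c in range(COLS):        # ...and sample each column in turn
        if baseline[r][c] - sense(r, c) > 20:   # threshold is arbitrary
            touched.append((r, c))
print(touched)  # [(1, 2)]
```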

 

How It Works

 

When you use a touchscreen, the effect appears instantaneous. You can thank powerful software for this illusion, for even the simplest interaction with a touchscreen is, under the hood, a complicated process. First, the software must locate where you are on the screen before refining your position. This refinement looks for pressure points (not to be confused with resistive touchscreen technology), eliminates any noise surrounding the input (your finger is blunt, unlike a narrow stylus) and then plots this data to a grid to calculate where it thinks you meant to touch the screen. More complicated movements are compared against a library of known gestures to understand your input. The device's processor then relays this information to the application level of the software, which finally displays the intended result on the screen. All this is done in less than the blink of an eye.
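One step of that pipeline, refining a blunt finger-sized blob of readings into a single intended point, can be sketched as a noise-filtered weighted centroid. The grid values and noise floor below are invented for illustration.

```python
readings = {  # (x, y) sensor cell -> signal strength under the finger
    (10, 20): 5, (11, 20): 42, (12, 20): 61,
    (11, 21): 88, (12, 21): 70, (13, 21): 8,
}
NOISE_FLOOR = 10  # drop cells the finger barely grazed

signal = {cell: v for cell, v in readings.items() if v > NOISE_FLOOR}
total = sum(signal.values())
x = sum(cx * v for (cx, _), v in signal.items()) / total  # weighted centroid
y = sum(cy * v for (_, cy), v in signal.items()) / total
print(f"intended touch point: ({x:.2f}, {y:.2f})")
```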

 

Multi-Touch and the Market

In principle, some single-point touchscreen technology is applicable to multi-touch, which allows you to interact with multiple points on the screen simultaneously and can detect complicated motions and gestures. In addition to the capacitive technology, some touchscreen devices utilize infrared, sonar and optical waves to sample surface input. But other reasons exist for the proliferation of multi-touch technology beyond technological advances and better software.

 

Pros and Cons of Touchscreens

 

Touchscreens, as with everything, have pros and cons. Here's a breakdown:

 

The Good

  • Interactivity, simplified interface
  • Mobile devices benefit from versatility of touchscreen UI (user interface)
  • Resistive touchscreens support any pointing device

 

The Bad

  • Learning curve for gestures, keyboard typing awkward
  • Surface can become scratched or dirty, hampering performance
  • Capacitive touchscreens only support charged pointing devices (your finger)

 

 


Payment Company Executive Sees Mobile Technology as Opportunity to Improve User Experiences and Security

 

James Anderson, senior vice president and head of the mobile products division at MasterCard, is vice chair of the Near Field Communication Forum, an industry group that develops specs to ensure interoperability among devices and services. (Flickr photo)

James Anderson is senior vice president and head of the mobile products division at MasterCard. Over time, the well-known payment industry company has deployed new technology to keep pace, moving from manually operated machines and carbon slips to telecommunications-based point-of-sale transaction processing and rolling out contactless payment technology with chip-level security.

 

Driven by smartphone and tablet adoption, a growing number of the 23 billion transactions MasterCard processes annually through its worldwide network are mobile. In anticipation of growing demand for m-commerce, the company has been working on mobile for more than a decade, pursuing secure common platforms that will enable payments on multiple mobile devices.

 

Anderson sat down recently to talk about how he sees mobile technologies changing people's lives.

 

How does one of the world's oldest credit card companies keep pace with the rapidly changing mobile world?

 

We've been working on mobile for a while -- 10 years or more -- and what we're trying to do is leverage the fact that consumers see the mobile device as incredibly important to them and they also see payment as incredibly important. We want to bring those two together, to deliver great user experiences and obviously grow our business.

 

Mobile is just a very common phenomenon now in many parts of the world. We're seeing merchants who are getting 10, maybe 15 percent of their transactions coming off mobile devices.

 

Everybody talks about multiple connected devices and I subscribe to that theory. I just look at the use cases that consumers have and even the pocket sizes. You know, there's small pockets and there's big pockets and people want screens that fit into both of them. I just see the type of behavior people engage in is different, and the type of screen size that they need is therefore different. So I think that, at least in the developed markets, we're going to have the luxury of multiple connected devices to get our tasks done.

 

With the Google Wallet smartphone app installed, shoppers with NFC-enabled phones can pay by waving their phone near an NFC terminal and entering a PIN. (Flickr photo)

In emerging markets, there may not be the "luxury" of multiple devices, but is there demand for mobile payments?

 

In the emerging markets, we obviously have card products, but we do see the ability to leverage a mobile device, which [has] penetrated extremely successfully into emerging markets. We do see the ability to make that into a payment device as an extremely compelling way for people to first interact with our network. So we've launched different programs. We have a partner program where a number of vendors have come together and are implementing MasterCard products targeted at emerging markets.

 

I think that's going to be a big phenomenon starting in 2012 and going on for the next couple of years. As we see initial versions of mobile money in emerging markets really blossom into full-functioning payment solutions for consumers, we think there's a great story there. This really is a development activity for markets that can eliminate a lot of friction from their economies by leveraging the convenience of electronic payments.

 

What security issues arise with more consumers going mobile?

 

Security is part of what we design into all of our products and all our services, and it's the same for mobile as it is with [a physical] card. Because of the scale in which we operate, we really try to build on common platforms in order to get big, mass adoption. For example, we built PayPass [MasterCard and Intel are collaborating to improve security and the consumer experience on a variety of emerging payment technologies, including PayPass], which is our contactless technology, on top of our chip technology, and that's a very secure-by-design technology. We actually have a secure element, which is a chip that's tamper resistant, to protect the payment application that the customer or the bank has put on there.

 

Technology is absolutely critical. What's exciting is we can actually communicate with the mobile device because it is an interacting device. We can actually give the consumer evidence of the security. We can really reinforce the value of the product to them through different things and that's what we've built into a lot of our products.

 

What do you see as the next significant technological step for the payment industry?

 

The nature of payments is that it tends to move a little slower than other technology trends. I think the big trend in 2012 is probably something that we started working on in 2006, which is NFC. Near Field Communication is a radio technology for short-range communication. We run payments over it, but it can run many other services, and after a lot of investment by a lot of people, it seems like this year we're going to see a lot of NFC devices, and we think that's going to bring a lot of NFC services, one of which is payment. We think it's going to be very compelling for consumers. [Anderson is vice chair of the Near Field Communication Forum, an industry group that, among other things, develops specifications for NFC technology that ensure interoperability among devices and services.]

 

MasterCard PayPass replaces the magnetic stripe on credit cards with a microchip and can be tapped on a point-of-sale terminal rather than swiped -- the technology can be deployed in phones, key fobs and plastic tags. (Flickr photo)

How do you see mobile changing the shopping experience?

 

There's a tremendous opportunity to use mobile in retail. Look at the retail experience. On the Internet now you can find out product availability instantly. You go to Zappos because you want to find shoes. You don't ask somebody if there is a 9 1/2 and 20 minutes later they get back to you. Instantly the webpage refreshes and tells you whether they've got availability in 9 1/2s.

 

That's not available in the physical world today, even though somewhere there's a system that knows whether they have 9 1/2s in the stockroom. Bringing that out and exposing it to the consumer can, in my mind, really reinvent the brick-and-mortar retail experience. There's a lot of legacy infrastructure that makes it hard, but the opportunity is profound to make the retail shopping experience better, and the only really viable device for doing that with the consumer is a mobile phone.

 

What are roadblocks to wider adoption for mobile payments?

 

We're the payment piece at the end of a process. Our job is to make the payment piece almost disappear and be frictionless for our retail customers -- to deliver a better user experience. So last year we launched Google Wallet in conjunction with Google and Sprint. So, that's the first wave.

 

For us at MasterCard, scale really matters. We have 1.8 billion cards. So we deal in very, very large numbers of cards and customers, and transactions. And so, the mobile team is working on how we scale up these value propositions. We've done a lot of work in terms of the underlying technology to make it open and make it useable by anybody. We think we've created the right foundations. We've done a lot of partnerships with people like Google to get the initial deployments and now we're looking to scale it out so we can really get mass market. Handsets are the critical piece of the puzzle. If somebody can't get the handset with the right capabilities then however great our product is they're not going to use it.

 

Where does the cloud fit in?

 

There's a couple miracles of modern life, you know, one of which is that I get off a plane someplace, I take out my MasterCard and I check into a hotel. And they let me in. And the only reason they let me in is because I've got this small piece of plastic. And why does that work? It works because of the power of the cloud. That merchant is able to essentially connect back to the issuing bank 4,000 miles away and basically address the risk of the fact that I'm walking into the hotel and they need to get paid. That's basically enabled by the MasterCard cloud if you want to use that terminology. So, we're big believers in a cloud that's going to be applied everywhere. Clearly, the idea of remote resources in a server accessible anywhere is an amazing, new powerful concept. MasterCard has been doing it for 30 years and we're excited to be part of the future, and we'll take care of the payments.

 

 


 

Smaller, Cheaper Sensors with Lower Power Demands are Creeping into Every Aspect of Our Lives ... Maybe Even inside Our Heads

 

Joshua Smith holds nearly 20 patents, including one for a capacitive sensor that detects passenger position for airbags that has been deployed in Hondas since 2000 -- if the passenger's head is too close to the side airbag, the sensor will disable it. (Flickr photo)

Sensors are everywhere around us from smartphone touchscreens to elevator buttons to thermostats. These sensor devices, which receive and respond to a signal, are a linchpin of the so-called "Internet of Things." As they become smaller, cheaper and require less power they are being deployed in more places that we encounter every day -- whether we are aware of it or not.

 

Joshua Smith is a researcher on the cutting edge of sensor technology. Smith holds nearly 20 patents, including a capacitive sensor he developed as part of his MIT doctoral thesis that detects passenger position for airbags -- if the passenger's head is too close to the side airbag, the sensor will disable it. The technology has been deployed in Hondas since 2000.

 

Today, Smith leads the Sensor Systems Laboratory and research group at the University of Washington, where he is an associate professor of Computer Science and Electrical Engineering. The former principal investigator at Intel Research Seattle, where he led robotics projects, is now focused on inventing sensor systems, devising new ways to power them and developing algorithms for using them.

 

Recently, Smith discussed the state of sensors, current and future projects, and the concerns that arise as sensors become more commonplace.

 

Sensors are all around us, but that doesn't mean people are aware of them. Where do we encounter sensors in everyday life?

 

Sensors are kind of creeping up on us. They're growing in importance and finding themselves in places we don't think about.

 

At a high level, there are sort of these two worlds. There's the world of information and computation and even mind and thought, and then there's the physical world of matter and energy and stuff like that. Sensors are the place where those two things touch. That's one of the exciting things about sensors. In terms of your daily life, I think sensors are getting more and more common. Your phone has a ton of sensors now. The touchscreen itself is an interesting kind of sensor. I remember reading Walt Mossberg's review of the iPhone, and he was writing about the addition of a compass in the phone and he said, "That's ridiculous. I don't need a compass!" But he didn't realize that the point of the compass was not so you know which way is north when using it in compass mode. It's for a whole lot of other things: when you combine a compass with an accelerometer, navigation can show maps to you in the right orientation.
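A minimal sketch of that compass-plus-accelerometer fusion, assuming ideal, calibrated sensors: the accelerometer supplies the gravity direction, which lets software project the magnetometer reading onto the horizontal plane and keep a map pointed the right way even when the phone is tilted. Axis conventions and signs vary by platform, so treat this as illustrative only.

```python
import numpy as np

def tilt_compensated_heading(accel, mag):
    g = accel / np.linalg.norm(accel)      # unit vector along gravity
    m_h = mag - np.dot(mag, g) * g         # horizontal part of magnetic field
    north = m_h / np.linalg.norm(m_h)
    east = np.cross(north, g)              # sign depends on the axis layout
    fwd = np.array([0.0, 1.0, 0.0])        # assume phone "forward" is +y
    f_h = fwd - np.dot(fwd, g) * g         # forward axis, projected flat
    return np.degrees(np.arctan2(np.dot(f_h, east), np.dot(f_h, north)))

# Phone lying flat and pointing magnetic north: heading comes out near 0.
print(tilt_compensated_heading(np.array([0.0, 0.0, 9.8]),
                               np.array([0.0, 30.0, -40.0])))
```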

 

An actor portraying Joshua Smith shakes hands with a robotic hand in an Intel ad. Smith led robotics projects during his time at Intel Research Seattle. (Flickr photo)

Another area that you don't see in everyday life, but I think is exciting, is the possibility that sensors will help create better robots. So if you think of a sensor as a window through which a digital system or computer can look at the physical world, it gets more exciting if the computer can not only look at the world through the sensors but can actually start to physically change the world. Robots are the way to do that, so sensors can help make robots possible.

 

What are the latest developments in sensor technology?

 

Sensors are getting smaller, lower in power, lower in cost. Another big thing is that computing is getting more capable and lower in power. What we have been building on is the fact that as Moore's Law continues, the energy efficiency of computing has been improving. That means we can do more instructions per microjoule. What that means is it's starting to be possible to power sensors wirelessly, at greater and greater range and distance as time goes on. That's an exciting opportunity. The type of thing we've been working on is harvesting energy from RF signals from sources like TV towers and cell phone towers. We've done it successfully.
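A back-of-the-envelope sketch shows why instructions-per-microjoule is the figure that matters for RF harvesting. Both numbers below are illustrative assumptions, not measurements from Smith's lab.

```python
harvested_power_uw = 50.0   # assumed microwatts scavenged from a TV tower
energy_per_instr_nj = 0.1   # assumed nanojoules per instruction on a low-power MCU

instr_per_sec = (harvested_power_uw * 1e-6) / (energy_per_instr_nj * 1e-9)
print(f"{instr_per_sec:,.0f} instructions/second on harvested power alone")
# 500,000 instructions/second -- modest, but enough for a duty-cycled sensor
# to sample, filter and transmit a reading every few seconds, perpetually.
```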

 

We want to collect energy in order to run sensors perpetually so you can do a lot of different things. One of the projects we're working on is powering a new type of low-power chemical sensor. This would be for outside a chemical plant. The sensors would monitor if chemicals are being released in the air, checking the air quality and relaying the information. That's one example. Another is searching for a parking spot. The sensor could be mounted to every parking spot and from your phone or another device you can find out where the nearest open parking spot is.

 

"You think you like your cell phone now? Imagine when they can read your thoughts," says Joshua Smith of work on brain implants and near-field communications. (Flickr photo)

What about sensors inside the human body?

 

We have a center here called the Center for Sensorimotor Neural Engineering. We're trying to make brain implants that are wirelessly powered, and these can be used by people who have diseases that cause seizures. Today, temporary implants are used to measure when a seizure is happening, but they have to be removed after a few days. If we can wirelessly power those, then perhaps you could have something that watches for problems continuously and maybe stops the seizure as it is happening, and you can potentially use the same technology for a brain-computer interface.

 

Another slightly crazy thing that we're working on is trying to create brain implants that are powered and read by near-field communication. What that means is your cell phone could talk to the implant electronics in your brain and power it. Who knows what the possibilities are, but if there's going to be some type of implant in your brain then why not have the cell phone be the thing that powers and reads it. You think you like your cell phone now? Imagine when they can read your thoughts.

 

The possibility that we can actually implant some of these things is mind-blowing because we can wirelessly power sensors, make them sustainable and keep them in the body for a long period of time, potentially. I think that's really exciting.

 

What's the next big development you see for sensor technology?

 

One big trend I see is the possibility of solving the power and connectivity issue. A lot of people worked on sensor networks in the 1990s, and I think one of the big remaining problems that's preventing the large-scale adoption of that stuff is the power problem. If we can do wireless power and connectivity to sensors I think it makes wide-scale deployment of sensors feasible when it wasn't before because of these practical reasons. It was just too hard and expensive to power them.

 

 

Related stories

 

As 22-nanometer Chips Reach the Market, a Longtime Industry Observer Considers the State of Semiconductors Today and What Lies Ahead

 

Dan Hutcheson, semiconductor industry analyst and CEO of VLSI Research, holds up a 450mm silicon wafer. (Flickr photo)

Dan Hutcheson has been following the semiconductor industry for more than 30 years and ranks among the foremost independent authorities on chip making and the economics of innovation. In the early 1980s, he developed the first factory cost-of-ownership models, and more recently advised the White House Council of Economic Advisors on innovation.

 

Hutcheson, who is CEO of VLSI Research in Santa Clara, Calif., has seen scores of major factories around the world. He has a detailed understanding of the industry supply chain and equipment manufacturers, and has a record of success predicting trends based on sophisticated economic models. Amid swirling debate over the strengths of fabs versus fabless chipmakers, shrinking geometries and node transitions, Hutcheson shared his perspective on the industry today, the future of Moore's Law and the transition to 450-millimeter wafers.

 

How do you see the industry today?

 

We've gone through 2 decades when foundries were growing faster than the industry. Then the fabless industry was growing faster than the industry, and the IDMs [integrated device manufacturers] were going to go away. Maybe 8 years ago, I predicted that as we moved below a hundred nanometers, design and manufacturing were going to become integrated again, that the key strategic thing that made the foundries possible was going away and that you needed a lot tighter coupling between manufacturing and design. You're seeing that now.

 

It used to be that the foundries were neck-and-neck with the IDMs on process technology, and now they could be a generation behind. Certainly a generation behind Intel in the logic space. They're also behind the memory IDMs. Intel is already ramping 22-nanometer, and they're still trying to get to entitled yields on 28 nanometer.

 

If you look 5 years out, how will the market landscape for fabs, foundries and process tech be different than it is today?

 

If you look at the foundry industry, there's only one company in that world that has been consistently profitable. I think what you're going to see is more consolidation in the foundries. It's going to narrow down to one, and they will become more collaborative with their fabless suppliers.

 

We've already seen that the foundries' share of the market is not changing in terms of share of non-memory. In the mobile communications chip market everybody knows Qualcomm is No. 1, and you say, "Well, who's No. 2?" Most people don't know it's Intel. There's this belief that Intel can't compete in mobile, and yet they're No. 2 in the space already.

 

Dan Hutcheson has been following the semiconductor industry for more than 30 years. (Flickr photo)

How are the manufacturing and process worlds changing as we scale to geometries, materials and manufacturing techniques that are increasingly complex and cost prohibitive?

 

As you put more and more transistors on a chip, and you get that ability to put a billion transistors on a chip, you've got the entire system there. So now you're putting all this other system architecture onto the chip itself. That means that the whole surface of the chip is very different.

 

In the '90s and the 2000s, if you followed the design rules and the design guidelines, you didn't have to worry about manufacturing; you could just make your design, and they pretty much worked coming right out of the box.

 

Now, the design can be perfect, but if you haven't dealt with all the lithography hotspots -- all the various issues of actually manufacturing a chip -- it's very difficult, because everything is so close together.

 

You mentioned complexity and the challenges it presents for manufacturing. How important is process technology in today's competitive environment?

 

It's critical because of the complex supply chain between companies that are foundries and companies that are fabless -- there's inter-competitive issues.

 

One of the advantages foundries had in the '90s and the 2000s was that it was cheaper to produce in Taiwan. Labor and engineering were a higher percentage of the total cost, and many IDMs were, frankly, inefficient in the way they ran their fabs. Today, the IDM has a huge advantage because there's no sales, marketing or purchasing interface between the designers and the process engineers. They both can work together and solve problems, and this is why they're able to do it faster. They have shorter learning loops. They have shorter decision cycles, and that means they get answers faster.

 

The other thing is the fabless company doesn't necessarily want to give the design to the foundry. The IP is pretty leaky in the fabless/foundry space and that means that the designers don't necessarily want to give a leading-edge design to the foundry. So the foundry runs their process off of SRAMs and test chips and stuff like that.

 

Some say it's yield related, some say it's investment related. What do you make of the 28nm capacity shortfall we've seen in recent headlines?

 

If you have a yield problem, you also have a capacity problem. In reality, it was a demand problem; there wasn't enough demand for 28 nanometer. There wasn't enough demand because the yields were so poor. Now it's a capacity problem because they've solved most of the yield problems, so now there's this huge flood of demand for 28 nanometer.

 

Looking ahead, how will the rate of materials innovation change as we get to smaller nodes like 14nm and 10nm?

 

I think the interesting thing about materials is that we've gone through an era of massive materials innovation to deal with the limitations of devices. Tri-Gate resets the whole game because now you're back to transistor physics, and it's not a materials game. It's a structural game, and you're going to a much better structure that gives you a lot better device performance for lower power consumption. It really is -- I hate to use the word -- a game-changer. You've seen almost everyone in the last year realign toward a FinFET type of device. That's the huge breakthrough Tri-Gate represents in terms of technology. It's like going from prop engines to jet engines.

 

Why do you think Tri-Gate is a game-changer?

 

It opens up the world of semiconductors into the sub-20nm future, and getting down to 10-[nm] and sub-10nm. The biggest advantage is that it's a much more efficient device. I could say, "It's like going from a gas-engine car to a hybrid" in that it consumes a lot less power, but the bad thing about that example is that hybrids aren't known for their performance. So it's like going from a 4-cylinder to a V-12, and getting all the advantages of a hybrid at the same time. It's a unique technology because these things just don't come along very often in history.

 

Do the challenges in sub-micron mean we're nearing the end of Moore's Law?

 

We used to measure everything in microns and now we're measuring everything in nanometers. That was when you started seeing so many companies start to fall behind because there were just so many technical issues. There was a time when I could come to the equipment suppliers and get a total solution and actually be in semiconductor manufacturing. I could buy my design tools, get my IP and it all worked together. Once we crossed that [nanometer] barrier, we ran into all kinds of problems.

 

So far, Moore's Law continues. What makes Moore's Law so powerful is that he [Intel co-founder and former CEO Gordon Moore] described that you get twice the components every year. It was every year back then [1965]. In '75 he changed it to every 2 years, but you get twice the components per period of time, and it costs you roughly the same to get 2x the components. That's what drove the industry, because it meant that you would always get better and better electronics and it wouldn't cost you more. What we saw was the cost of computing just went down orders of magnitude.
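The compounding Hutcheson describes is easy to make concrete; here is a short Python sketch of the 1975 formulation, with components doubling every 2 years at roughly flat cost per period.

```python
components = 1.0   # relative component count at year 0
cost = 1.0         # relative cost per period stays roughly flat
for year in range(0, 21, 2):
    print(f"year {year:2d}: {components:6.0f}x components, "
          f"{cost / components:.6f}x cost per component")
    components *= 2  # one Moore's Law doubling every 2 years

# After 20 years: ~1,000x the components at ~1/1,000th the cost each --
# the orders-of-magnitude drop in computing cost Hutcheson cites.
```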

 

The interesting thing that's happening, though, is people still want more density, they want more performance. On their cell phone, they want it to do more computing tasks, they want video, they want streaming, and they want the battery to last for a day, maybe 2. That means that you still have to shrink transistors, and you still have to put more density into them. We're moving into a new world that wasn't in the original papers that Moore wrote. We're moving from density driving cost to density driving the ability to pack more transistors into a given area at lower power consumption. So I get more performance for the same power. Whether we call it Moore's Law or something else, it's going to continue to drive this industry for at least another 10 years.

 

I tend to believe that Moore's Law is an article of faith. If you try to apply a scientific method to predicting the end of Moore's Law, you can always show what you can't do in science. And the problem that you can't show is how you are going to get around that. Who's going to come up with the next great idea? And usually, it's one or two people on the whole planet that come up with the next great idea.

 

Look at the tablet phenomenon. People have played around with tablets and they went nowhere for 20 years. Then Apple cracks the code and they introduce the iPad. It takes off and it's created huge new markets for semiconductors. People say, "Oh, it's killing the PC," right? Well the fact is, the PC has continued to grow, the PC industry has continued to grow ... even [for] Apple! Apple's selling far more Macs today than they sold before they introduced the tablet.

 

Nvidia, TSMC and others have called for the industry to move more aggressively to 450-millimeter wafers. What's driving the speed of this transition?

 

The timeline we see is really toward the end of this decade. There's just so much that has to happen; it's not like you blink an eye and you're at 450mm wafers. Even back in the days when everybody made their own tools -- like Fairchild -- they made wafer-size transitions maybe once every year or two. The 300mm wafer took a good decade to bring to manufacturing. We've seen that time stretch out because there's so much technical work, and there's also a lot of investment that has to be made, because the silicon manufacturers have to invest in the crystal-pulling furnaces and all the technology it takes to make a 450mm wafer and polish it.
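As a quick sketch of why the transition is worth a decade of work: usable area grows with the square of the wafer diameter, so each fab pass yields far more chips. The computation below ignores edge losses.

```python
import math

def wafer_area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2  # area of a circular wafer

for d in (200, 300, 450):
    ratio = wafer_area_mm2(d) / wafer_area_mm2(300)
    print(f"{d}mm wafer: {wafer_area_mm2(d):,.0f} mm^2 ({ratio:.2f}x a 300mm)")
# A 450mm wafer has 2.25x the area of a 300mm wafer -- roughly twice the
# dice per pass through the fab.
```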

 

 


 

Computer-Generated Filmmaking Races to Stay Ahead of Advancing Audience Expectations

 

Kate Swanborg, head of enterprise marketing for DreamWorks, views technology as an investment in being a "differentiator in our market space." (Flickr photo)

It's an exciting time at DreamWorks Animation. "Madagascar 3" is being released in theaters and the studio has many other productions in development. From "Shrek" to "Kung Fu Panda," what audiences see on screen wouldn't be possible without advanced computing technology. It's so vital that Kate Swanborg, head of enterprise marketing for DreamWorks, characterizes technology as an investment in being a "differentiator in our market space."

 

In her role overseeing technology partnerships for the studio, Swanborg is responsible for providing the technological foundation that artists and engineers use to produce images, characters and stories that, in her words, "simply could not be produced any other way." Recently, she shared her thoughts on the essential role technology plays within DreamWorks and how audiences are becoming producers.

 

How important is state-of-the-art technology at DreamWorks Animation today?

 

Technology is ubiquitous in every aspect of our filmmaking, whether it's designing a character using the most high-powered workstations, rendering the entire film throughout our global data centers or reaching out into the cloud for cloud computing. Then of course, [there's] actually getting the films out into the marketplace, whether it's digital distribution, digital to the theater, best-in-class 3-D. We rely on technology to ensure that the audiences are getting the most premium experience possible.

 

We're seeing more complex, realistic animation with each new film it seems; does that parallel how audiences are evolving?

 

It is amazing. Audience expectations keep growing. Every movie takes them to a new level of visual richness and storytelling, and so it's very, very important for DreamWorks to stay not just on the cutting edge, but on the leading edge of that. Audiences want characters that they can fall in love with, but they want that to happen in an environment that feels real and inviting.

 

The key to really believing in a character is having that character fully realized. It's not full realism. We can do live action for that. At DreamWorks Animation it's about creating a character that has a facial system that has so many nuances in it you can actually see when they're nervous or concerned or excited. You can watch them anticipate a moment, and all of that has to do with the underlying technology that allows our animators to control literally thousands of animation control points in just the face alone. That is what will make an audience member fall in love with a character.

 

Where does mobile technology fit in?

 

The biggest way that we see mobile technology devices affecting DreamWorks is in content consumption. People want access to their media anywhere, anytime, on any device, but the key is that we invest a lot at DreamWorks Animation to create visually rich characters as much as possible. We want that media to look great, whether it's on a smart phone or a tablet. So therefore, we are working with all of the technology companies to ensure that their devices are providing consumers with the most premium experience possible.

 

So the audience experience is no longer contained within a theater or a living room?

 

I think that the most exciting thing we're seeing right now is the interconnectedness of all of the different types of technologies we have. No longer is it just a phone or just a television or just a computer. Everything is going to merge and, therefore, the distance that used to be between the creation of content and the consumption of content ... that distance is going to go away, and suddenly, the creation and consumption of content is going to become a much more interactive experience.

 

I think that one of the most exciting things that we're seeing is the idea that, as a consumer, I can actually create. I can go and start creating characters and imagery. Now, of course, at DreamWorks Animation we go and identify the best artists and engineers on the planet, and that talent is still critically important. But mobile technologies are really allowing the consumers to take those wonderful assets that are created and bring them into their whole lives and actually become producers in their own right.

 

 


 

Technology Analysts Say Spotlight Will Shine on Touch, Windows 8, Convertibles

 


According to several industry analysts, new computing experiences will headline Computex 2012. Touch-enabled interfaces, Windows 8 and convertibles are expected to grab the most attention at the annual event, among the biggest for the tech industry, which officially starts Tuesday in Taipei, Taiwan.

 

"Touch is going to be huge at Computex this year," said Patrick Moorhead, principal analyst at Moor Insights & Strategy. "We're so close to [the release of] Windows 8. You're going to see touch on tablets, clamshell notebooks and on convertibles as well."

 

Rob Enderle, principal analyst at Enderle Group, sees Computex as the beginning of the Windows 8 coming-out period. "Most of the products that will be compelling will either be touch products or will be ways to add touch to existing offerings," he said. "We'll probably see more convertibles, products that are tablets and notebooks, and products that cross over the bounds of consumer and corporate."

 

"We'll probably see more convertibles, products that are tablets and notebooks," said Rob Enderle, principal analyst at Enderle Group. (Flickr photo)

John Jackson of CCS Insight said, "All eyes are going to be on Windows 8. How real is it? Where is it showing up in terms of underlying hardware architecture, what types of form factors and what types of use cases and experiences is it enabling? Ideally we see the hardware vendors complementing some of these capabilities."

 

The classic feeds and speeds of hardware are going to improve, according to Moorhead, but he's more interested in how experiences can extend across devices. "Connecting your phone to your TV in a wireless way and extending that experience to another display," he said. "Ironically, what's going to be hot for tablets is how they convert into a clamshell."

 

"Computex is traditionally a hardware-focused show," said IDC analyst Danielle Levitas. "What's going to be different this year is that there will be more emphasis on the services and applications that ride on top of the devices."

 

 

 

 


 

Founder of Popular Retro Photography iPhone App Finds Inspiration in American Innovation Driven by Our Increasingly Connected and Global World

 

7295396286_9f5ea1ae45_o.jpgLucas Buick co-founded Synthetic, a company that brought retro-looking photography to smartphones with its popular Hipstamatic app for the iPhone. (Flickr photo)

Lucas Buick is the CEO and co-founder of Synthetic, a company best known for bringing the washed-out and color-saturated look of retro analog photography to the digital world. The company's photo app, Hipstamatic, the Apple iTunes Store's 2010 App of the Year, sparked a frenzy of photo special-effects and social sharing apps for smartphones. The $1.99 app was downloaded nearly 1.5 million times in its first year and remains one of the 100 most popular paid apps, with more than 5 million copies sold.

 

In 2006, Buick and his friend Ryan Dorshorst founded Synthetic as a small design consultancy. Three years later, they moved the bootstrapped startup to San Francisco and shifted its focus to mobile software. Today, Synthetic has a series of Hipstamatic apps, including SwankoLab and IncrediBooth, and an online store called Hipstamart, where people can order prints, posters, T-shirts and other items made from their digital photos. There's even an iPad magazine of curated content called Smack.

 

"What I love about magazines is larger trends being featured rather than the hourly trending topic on Twitter," Buick said in an interview with the San Francisco Chronicle.

 

Buick sat down recently to talk about how he sees mobile technologies changing people's lives.

 

How has being able to connect to the Internet wherever you go changed your life?

 

I think mobile technology is super interesting from the standpoint of everyday life. I'm walking through the streets and I need a great restaurant. I'm going to go to my phone and find it. I can read the news as it's happening. And when I'm sitting in line at the bank I get to shoot pigs with birds [referring to the game Angry Birds]. It's fantastic.

 

If you go back 5 years, we were all on feature phones. I had my hot pink RAZR. Today we're moving toward smartphones that are connected to the Internet and do a ton of different things, like photography and music. I now have many devices in one. I think we're going to continue down that path, but you're going to start getting specialty technologies as well. If I want a camera that takes awesome photos, I don't need it to text message or to have a phone feature. But I still want it to be connected so that I can share my images with whoever I want.

 

I can't even imagine my job 5 years ago. The revolution in mobile technology and hardware has completely changed everything. There's this brand new industry called apps with an "S." Ten years ago I was searching Excite for "appz" with a "Z," looking for software on Morpheus.

 

7264020932_3f3dab056c_o.jpgCapturing an image of Old Faithful Geyser in Calistoga, Calif., using an iPad. The photo was edited with Synthetic's SwankoLab digital darkroom application for the iPhone. (Flickr photo)

What sparked your desire to bring retro-innovation to mobile photography?

 

We've seen a big change in photography, going from analog to digital. That began 10 or 15 years ago, and now it's going from digital to mobile. In that transition, we've seen camera makers focusing on optical perfection. When we created Hipstamatic we were looking to capture emotion, which photography has always been a great medium for. That's really where instant photography fit in 30 . . . 40 years ago. And Hipstamatic is trying to fill that void today.

 

Three years ago Ryan, my partner, and I were running a design studio. Our clients stopped paying their invoices. Smartphones were just coming out and the [iTunes] app store was new. We wanted to have software that did everything that our favorite analog cameras did, but we wanted it in a device that we always had on us.

 

How difficult was it to get started?

 

The biggest challenge for us in 2009 was to be profitable from day one. We didn't have investors. We needed to figure it out and make it work. We needed to have the business model and the product align from the beginning.

 

Content becomes meaningful when it's about personal storytelling. We're all using our social networks to share our stories with friends, family and sometimes strangers . . . all in the hopes of creating joy or even this feeling that others are missing out. Is my life cooler than yours? Our photo app helps people create beautiful images that they're proud of sharing.

 

What's in focus this year?

 

Modern technology and the Internet have us all so connected. Content is more readily available, and it leads us to discover new things, to be inspired and inspire others to create. This year is about globalization through creative thinking. We're seeing a lot of American innovation coming out and it's inspiring others. They are seeing innovation through the way we do things. I think that you're going to see a lot more "Made in America."

 

 

Related Stories

 

The World's First Zero-Emission Supercomputer at the Thor Data Center near Reykjavik Draws Its Power from Renewable Resources

 

It does not sit in London, Tokyo, Beijing or New York. It is not humming along deep inside a corporate skyscraper.

 

No, one of the world's newest supercomputers -- and apparently among the world's greenest -- was recently fired up inside a low-slung grey building with red trim on a windswept plain outside Reykjavik, Iceland.

 

Iceland? As the global Internet build-out advances at a furious pace, the recently dedicated Thor Data Center in Hafnarfjordur is the world's newest example of an emerging trend in mega-computing.

 

Instead of locating powerful supercomputers near the companies or institutions that use them, why not build these machines -- then ship big data to them at the speed of light -- wherever on Planet Earth makes the wisest economic or environmental sense?

 

ThorDataCenter01.jpgThe Thor Data Center draws power from entirely renewable resources, such as the nearby Svartsengi geothermal plant. That is steam, not smoke, rising from Svartsengi, whose water is heated by Icelandic volcanic activity. (Flickr photo)

 

World's First Zero-Emissions Supercomputer

 

The supercomputer at the Thor Data Center is based on a cluster of 288 HP ProLiant BL280c servers. Powered by Intel Xeon L5530 processors, the cluster comprises 3,456 compute cores with 71.7 terabytes of usable storage, and pumps out 35 teraflops of performance.
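For readers who want to check the math, the quoted figures hang together. Here is a quick back-of-the-envelope sketch in Python; the inputs are taken from the article, and the per-server and per-core numbers are our own derivation, not the data center's specification.

```python
# Back-of-the-envelope check of the Thor cluster figures quoted above.
# Inputs come from the article; derived values are illustrative only.
servers = 288            # HP ProLiant BL280c blades
total_cores = 3456       # Intel Xeon L5530 compute cores
peak_teraflops = 35.0    # quoted aggregate performance

cores_per_server = total_cores / servers               # 12 cores per blade
gflops_per_core = peak_teraflops * 1000 / total_cores  # ~10 gigaflops per core

print(f"{cores_per_server:.0f} cores per server")
print(f"~{gflops_per_core:.1f} gigaflops per core at peak")
```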

 

While building and shipping the machine's parts to Iceland produced CO2, the machine -- and in fact all of Iceland -- is powered 24/7/365 by nothing but renewable hydroelectric and geothermal power. To light up Iceland's electrical grid, no fossil fuels puff, smoke or burn.

 

ThorDataCenter02.jpgThe Thor Data Center sits about 15 km (9.3 miles) outside Reykjavik, Iceland. Fiber optic cables tie it to the rest of the world. (Flickr photo)

 

Power Costs Far Lower Than in Europe

 

As a result, the cost of power to run the Thor Data Center is significantly lower than it would be in continental Europe. The latest figures show that electricity in Iceland costs about 0.05 euros per kilowatt-hour, roughly 20 percent cheaper on average than power across the European Union's 27 member nations.

 

That is a huge number -- 20 percent. Today's newest data centers and supercomputers, even those running the most power-efficient, latest-generation enterprise processors, still inevitably slurp large amounts of electricity. So even tiny cuts in power costs can translate into big money.
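To see what a 20 percent gap means in practice, consider a hedged, hypothetical example: a data center drawing a constant 1 megawatt. Only the roughly 0.05-euro Icelandic price and the 20 percent gap come from the article; the load, and therefore the savings figure, are assumptions for illustration.

```python
# Illustrative only: annual saving for a hypothetical data center
# drawing a constant 1 MW. The 0.05 EUR/kWh Icelandic price and the
# 20 percent gap are from the article; the 1 MW load is an assumption.
load_kw = 1000.0
hours_per_year = 24 * 365
iceland_price = 0.05             # EUR per kWh, as quoted
eu_price = iceland_price / 0.8   # implied EU average, ~20% dearer

annual_kwh = load_kw * hours_per_year
saving = annual_kwh * (eu_price - iceland_price)
print(f"Annual consumption: {annual_kwh:,.0f} kWh")
print(f"Annual saving vs. EU average: ~{saving:,.0f} EUR")  # ~110,000 EUR
```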

 

"This really gives us a one-of-a-kind opportunity," said Kolbeinn Einarsson, the head of business development for Advania, the Icelandic IT company that built the Thor Data Center in conjunction with the National High Performance Computing organizations of Denmark, Norway, Sweden and Iceland.

 

Chilly Icelandic Climate a Plus

 

Einarsson says that power costs in Iceland are predictable and stable, unlike those in many other parts of the world. And Iceland's chilly (but, despite the country's name, not frigid) temperatures make cooling a data center even more appealing.

 

Reykjavik's warmest month is June, when the average daily high temperature is just 60 degrees Fahrenheit (16 Celsius) and the low is 50 degrees Fahrenheit (10 Celsius). Reykjavik's coldest months are December and January, when the average daily low is a tolerable 28 degrees Fahrenheit (-2 Celsius).

 

ThorDataCenter03.jpgInside the Thor Data Center, which contains 3,456 Intel Xeon compute cores. (Flickr photo)

 

Three Undersea Fiber Optic Cables Move Terabytes of Data

 

But how difficult is it to move huge quantities of data into and out of Iceland, a remote island nation in the far north Atlantic just below the Arctic Circle?

 

Iceland is less remote than it might at first seem.

 

Three undersea fiber-optic cables currently tie Iceland to Scotland, Norway and Nova Scotia, Canada -- and provide bandwidth of about 9 terabytes per second. That is the equivalent of moving, in the snap of a finger, the entire printed collection of the U.S. Library of Congress.
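The comparison is easy to sanity-check. Taking the article's 9-terabyte-per-second figure at face value, and assuming the commonly cited ballpark of roughly 10 terabytes for the Library's printed collection, the transfer really does take about a second:

```python
# Rough sanity check of the Library of Congress comparison.
# The ~9 TB/s pipe is from the article; the ~10 TB size of the
# Library's printed collection is an assumed, commonly cited ballpark.
link_tb_per_second = 9.0
library_printed_tb = 10.0

seconds = library_printed_tb / link_tb_per_second
print(f"Transfer time: ~{seconds:.1f} seconds")  # about one second
```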

 

Richard Curran calls that data pipe "essentially unlimited" for today's purposes. Curran, an Intel product marketing director, was on hand in April to figuratively throw the switch and help dedicate the world's first zero-emissions supercomputer. However, additional submarine cables will likely be laid over the next year, which would more than double that capacity, and allow data to flow to Iceland directly from the U.S. and the European mainland at blinding speed.

 

"This is a fantastic business to be in, since our society is accelerating its demands for computing power, and also increasing demands for clean power," said Advania's Einarsson.

 

Though they do not operate in Iceland, Google and Facebook, which run some of the world's biggest data centers, are among companies riding this trend. Both companies have built, or are expanding, massive data centers in the Pacific Northwest, where nearby hydropower keeps electricity costs among the lowest in America.

 

Today in Reykjavik, the Thor Data Center is crunching data for 17 separate research projects in physics, chemistry, materials science, genetics, economics and biotech.

 

And the work is reportedly happening at lower cost, and with lower environmental impact, than likely anywhere else on Earth.

 

 

Related Stories

 

After Launching Google's Cafés and Running Eateries at Apple, John Dickman Is Remaking the Dining Experience at Intel

 

John Dickman knows a thing or two about feeding high-tech workers. Google's founders asked him to get their legendary Googleplex cafés off the ground. Then he jumped over to Apple.

 

Steve Jobs personally lured him into that café gig -- but then shouted at him shortly after he was hired that his pizza was "terrible," according to Dickman, who eventually moved over to Intel last summer.

 

Dickman's mission as program manager of food services is to support the site managers at each of Intel's 64 cafés worldwide in serving up delicious, convenient, high-quality and memorable experiences to employees. That means a laser focus on everything from menus and café layouts to customer service and the color of chairs.

 

A recently remodeled café in Santa Clara, Calif., and another newly opened in Hillsboro, Ore., are the first conceived under his direction, and both have garnered rave reviews from employees.

 

Neil Tunmore, director of Intel's corporate services, which oversees all of Intel's facilities and services including cafeterias, says food "is one of the things that helps retain employees."

 

Over the years, Dickman has developed a hands-on approach that harkens back to his first job in the food industry: dishwasher in a Marriott hotel. From there he moved to airline catering, where he taught flight attendants how to prepare food on planes. In the mid-'90s, Dickman transitioned to the corporate world and managed food services at a host of Silicon Valley companies, including Oracle, Cisco Systems, Yahoo and National Semiconductor, before his stints with Google and Apple.

 

Despite his wealth of experience, Dickman isn't afraid to roll up his sleeves and dive into managing the complex task of feeding thousands of hungry employees every day. Here's an inside look at how he spent part of a day recently visiting Intel cafeterias at its sprawling campus outside of Portland, Ore.

 

FoodDude01.jpg

 

10:24 p.m. -- Dickman starts his "day" by meeting with a manager of Intel's food service vendor at the Ronler Acres campus in Hillsboro. He wants to be sure the night shift folks are fed well. Twice a week, night shift employees -- most of whom work in the fab -- enjoy a high-quality meal at a low price. Tonight's menu includes grilled tri tip with chipotle-garlic spice rub, horseradish cream, herb roasted red potatoes, sherry vinaigrette, spinach and roasted tomato salad with a slice of pie -- all for a company-subsidized price of $3.95.

 

FoodDude02.jpg

 

11:20 p.m. -- Dickman checks in with Michael Haughey, the lead line supervisor at the Oregon café serving the night shift. In addition to pushing for high-quality food, Dickman is also dealing with the unique requirements of Intel's manufacturing environment. In China, for example, factory employees have a small window for lunch. Food for these employees used to be prepared far in advance -- leaving hot food cold and cold food tepid. Not very appetizing. Dickman has changed that system, telling café staff to store food in hot boxes or coolers so that when it's ready to be served, it's at the right temperature and tastes delicious. He says, "I want to show shift employees some love."

 

FoodDude03.jpg

 

7:45 a.m. -- Dickman grabs a cup of the darkest coffee, which he takes black, and gets on the phone with Intel's Gordon Wilson in Germany to review café plans in Europe and the Middle East, including the brand new café at a fab in Israel. More than 2,500 employees attended that café's opening in early April; the café then had to close briefly so rabbis could sterilize the kitchen for Passover. Dickman said his team "always creates workarounds" for each site's particular cultural or religious traditions. Intel has kosher kitchens in Israel and halal kitchens in Malaysia.

 

FoodDude04.jpg

 

9 a.m. -- Back in the newest Ronler Acres café, Dickman grabs breakfast and then looks over the layout for a planned upgrade to another café on the same campus. Instead of separate serving and dining areas, the new design intersperses the dining space with different "restaurants." These will offer traditional fare (such as a salad bar) as well as some completely new options, including a "gastropub" and Asian fusion cuisine.

 

FoodDude05.jpg

 

10:45 a.m. -- What's for lunch? Executive Chef Ron Stewart takes Dickman into the kitchen of a Ronler Acres café to see what Intel employees will be feasting on today. Here, Chinese steamed buns -- cha siu bao -- are ready for their barbeque pork filling. "Over the past few years, people's palates have been educated," says Dickman. "People demand culturally diverse cuisine -- they want new, different flavors. It wasn't long ago that people thought sushi was weird stuff -- now it's standard."

 

FoodDude06.jpg

 

11:01 a.m. -- Inside one of the two massive fridges at the newest Ronler Acres café -- kept at about 34 degrees Fahrenheit -- Stewart and Dickman check on the quality of locally sourced tomatoes. Both are proponents of using local ingredients for their better taste and smaller environmental footprint.

 

FoodDude07.jpg

 

11:34 a.m. -- Dickman meets with Brandon Bohling (left) and Eric Appel from Intel IT. The two are working with Dickman to create a desktop and smartphone app for all U.S. cafés. The app shows menus and lets employees rate and review each entrée. (Think Yelp.) Soon, the app will also include a "food-to-go" option so busy employees can order food and have it ready to be picked up -- or, for a small fee, delivered straight to their desk or conference room.
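The article doesn't describe how the app is built, but the Yelp-style rating feature implies a data model along these lines. What follows is a minimal sketch assuming a simple entrée record with star ratings; every name, field and price here is our invention, not Intel IT's design.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Entree:
    """One café menu item that employees can rate and review (hypothetical)."""
    name: str
    cafe: str
    price_usd: float
    reviews: list = field(default_factory=list)  # (stars, comment) pairs

    def rate(self, stars: int, comment: str = "") -> None:
        if not 1 <= stars <= 5:
            raise ValueError("stars must be between 1 and 5")
        self.reviews.append((stars, comment))

    @property
    def average_rating(self) -> float:
        return mean(s for s, _ in self.reviews) if self.reviews else 0.0

# Example: an employee rates today's steamed buns.
bao = Entree("Cha siu bao", "Ronler Acres", 2.50)
bao.rate(5, "Better than the dim sum place downtown")
print(f"{bao.name}: {bao.average_rating:.1f} stars")
```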

 

FoodDude08.jpg

 

12:42 p.m. -- Dickman and Damien Davis, general manager of yet another Ronler Acres café, watch a staff member prepare a smoked house-made quail atop a bed of greens. "This is one of our most popular stations," Davis tells him. One of Dickman's challenges is breaking away from Intel's "copy exactly" approach, which is how Intel cafés have long been built, with identical layouts and menus. "People eat differently," he points out. Employees in Arizona and New Mexico prefer spicier foods. "If you don't offer green chilies in New Mexico, you don't stay open." In one Santa Clara café, where 60 percent of employees are of Indian descent, he plans to install an additional tandoor to make the Indian fare even better. In such places as China and Malaysia, the bulk of the menu is local cuisine, along with a "Western option."

 

FoodDude09.jpg

 

3:17 p.m. -- Dickman and the executive chefs from Oregon's seven cafés sip a chocolate shake made by a vendor that specializes in shakes made from gluten-free oats. The vendor is hoping Intel will stock its products. Their verdict? They'll introduce a few of the products and see if they take off. In the future, Dickman wants to create a system in which his customers -- Intel employees -- help taste and vet new products.

 

FoodDude10.jpg

 

4:10 p.m. -- Dickman boards the Intel shuttle from Hillsboro back home to Santa Clara, where he looks forward to a quiet evening. His dinner at home that night? Steelhead trout that he grilled on the barbeque and placed atop an arugula salad. He eats out maybe twice a month, preferring the quality of what he can cook at home. "What can I say, I'm a foodie!"

 

 

Related Stories

 

Research Findings Challenge the Conventional Wisdom about PC Touchscreens

 

TouchLaptop01.jpgParticipants in the study were given a laptop with a simulation of the touch-friendly Windows 8 environment. (Flickr photo)

Touch made smartphones easy to use. Touch turned tablet computers from a novelty into a multi-billion-dollar market. But touch on a laptop? That's a touchy subject -- if you listen to conventional wisdom.

 

Tapping away on a vertical screen all day could be "painful," causing users to develop aching "gorilla arms," some experts warn. Even the late Steve Jobs once said Apple had no plans to add touch to its laptops: "Touchscreens don't want to be vertical," he said at the MacBook launch in 2010. "It's ergonomically terrible."

 

Yet conventional wisdom didn't stop Gary Richman and his team at Intel's PC client solutions division from diving deeper into the concept of bringing touch to laptops.

 

"A Gut Reaction"

 

"I just thought that touch on a notebook might be kind of cool," Richman said. "It was a gut reaction on my part."

 

That feeling grew as the team came to understand that Microsoft's focus for its upcoming Windows 8 operating system was "touch first, touch first."

 

"People were getting more and more accustomed to touch on phones and on tablets, yet here everyone was saying 'we all know' that touch on a vertical plane didn't make sense," Richman said.

 

So he enlisted team member Daria Loi -- a user experience manager -- to test the "no touch on laptops" rule in the real world.

 

"We felt that if we don't explore this and challenge the conventional wisdom, years from now notebooks will end up being your grandfather's PC," Richman said.

 

Testing the No-Touch Rule

 

Loi set up focus groups in Chicago and in Milan, in her native Italy. The focus group members were ordinary computer users from all walks of life. They were given regular laptops that had been outfitted with touchscreens and a simulation of the touch-first "Metro" interface of Windows 8.

 

Over a couple of hours, the participants worked through a number of common computer scenarios, including formatting a picture, creating a PowerPoint presentation and even resetting the Wi-Fi connection. They could use the touchscreen, the mouse, the track pad or the keyboard -- whatever choice best suited their needs.

 

"We weren't doing it to prove whether one mode was better than another," Loi said. "We had no preconceived ideas."

 

But the results were "astonishing," she said. More than 77 percent of the time, the focus group participants chose to use touch for the various tasks assigned to them.

 

"As soon as I reviewed my tracking documents, there was no ambiguity about users' strong preference for touch -- I was blown away," Loi said.

 

A Hit with Focus Groups

 

"Wow, this is easy!" said "Pamela" from Chicago. "It's almost reading your mind," she said of the touch interface. "You think of it and you do it. Just touch it."

 

Another tester, "Betty," said, "I like the scrolling because you can just kind of flick your hand and go quicker."

 

Loi recalled that one user, an older man, said he had never used a touch interface before.

 

"He was telling me how long it had taken him to learn how to use a mouse and a trackpad," Loi said. "It had been a very frustrating experience for him to learn how to use these devices. 'This is so easy,' he said. 'I'm amazed at how quickly I'm learning.'"

 

"Simpatico!"

 

In her studies, Loi said, people approached the touchscreen in a variety of ways. They didn't try to touch a truly vertical screen; instead, they adjusted the laptop screen to a comfortable angle. They often held the screen with two hands, using their thumbs to touch buttons on the bottom and sides of the screen. It was almost as if the laptop were a giant cell phone.

 

TouchLaptop02.jpgParticipants in an Intel user experience research study enjoyed using touch interfaces on a laptop. (Flickr photo)

One woman in the Milan focus group said that interacting with the notebook via touch was "simpatico."

 

"I found this very telling," Loi said, noting that "simpatico" is a term used in Italy to describe a level of affinity between people, not between person and technology."

 

In practice, it meant that touch had turned a boring, run-of-the-mill laptop from a "work" device into a "play" device that encouraged people to interact with it in a variety of ways.

 

What about the dreaded "gorilla arms?" When asked about fatigue, no one said that was a problem.

 

"I believe it's actually healthier for your wrist," said "Heidi" from Chicago. Pointing at the touchscreen she said, "Here you are moving other muscles. I think that's good for the body."

 

Worth the Higher Price, Testers Say

 

Though touch was rated overwhelmingly positive, Loi noted that didn't mean the participants were ready to ditch all other forms of interaction. Most of the testers preferred to enter text on a keyboard, for example.

 

One last point: When informed that a touchscreen would add to the price of a laptop, most of the testers said they'd be willing to pay "substantially" more for the feature.

 

One Chinese tester said that he would love to be the first in his office to have a touch Ultrabook, saying it would make him look tech-savvy and cutting edge.

 

Changing Intel's Plans

 

Encouraged by the results, Loi went back to the field and ran the study in Brazil and China, each with familiar results: the majority of focus group participants loved touch on a laptop.

 

Intel's Mooly Eden referenced the study in his keynote address at this year's Consumer Electronics Show.

 

"People naturally use touch to swipe, expand and manipulate pictures and images directly on screen," said Eden, now president and general manager of Intel Israel. "And, so, touchscreen Ultrabooks will begin showing up in the market this year."

 

Armed with these new findings, Intel is now stressing that touch can actually be a competitive advantage on Ultrabook devices -- and the industry is listening.

 

Erik Reid, general manager of Intel's mobile client platforms, said, "This research was very valuable to Intel's larger business objectives. It enabled us to talk to OEMs about a new compelling usage for Ultrabooks."

 

 

Related Stories

 

Vietnam's Largest Solar Facility Joins Israel Installation as Second Intel Solar Array Outside U.S.

 

Solar_Vietnam.jpgThe solar array atop the Vietnam Assembly and Test Factory in Ho Chi Minh City is the biggest operating solar facility in Vietnam. (Flickr photo)

The largest operating solar power plant in Vietnam was recently installed at Intel's Saigon Hi-Tech Park facility in Ho Chi Minh City. The 1,092 high-efficiency photovoltaic panels on the roof of the Vietnam Assembly and Test Factory came online in April. The system is expected to generate about 321,000 kWh per year, all consumed directly by the factory, reducing the draw from the local electrical grid.
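Dividing the article's figures gives a feel for the scale of the array; a quick sketch, with inputs from the article and the derivations ours:

```python
# Derived from the figures above: per-panel output and average power
# of the Ho Chi Minh City rooftop array. Inputs are from the article.
panels = 1092
annual_kwh = 321_000

kwh_per_panel = annual_kwh / panels   # ~294 kWh per panel per year
average_kw = annual_kwh / (24 * 365)  # ~37 kW average continuous output

print(f"~{kwh_per_panel:.0f} kWh per panel per year")
print(f"~{average_kw:.0f} kW average continuous output")
```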

 

The facility's opening coincided with the release of the U.S. Environmental Protection Agency (EPA) list of Top 50 Green-Powered Organizations, which ranks organizations that use clean, renewable electricity from a variety of sources including solar. Intel has topped the list every year since 2008. Other technology companies on the latest ranking include Microsoft (ranked 3rd), Cisco (16th), Dell (41st) and Google (48th). According to the EPA, Intel uses more than 2.5 billion kWh of green power annually, which comes from solar and other Green-e certified sources such as wind and geothermal.

 

Solar_Folsom.jpgThe solar installation at Intel's Folsom, Calif. location is the company's largest, sprawling across 5.5 acres and generating more than 1 million kWh annually. (Flickr photo)

"In the past 4 years, we've 'overdoubled' the [Green power] purchases we've made," said Marty Sedler, Intel's director of global utilities and infrastructure. "Currently, we are buying 2.8 billion kWh annually and that is estimated to be more than 88 percent of our U.S. energy use."

 

Installations converting sunlight to electricity operate at 15 Intel sites across four U.S. states, Israel and, now, Vietnam. Sunlight also heats nearly 100 percent of the water used at the Bangalore, India facilities. Intel estimates that the solar installations at its facilities generate 5.5 million kWh annually.

 

The lead position Intel has established in the use of green power is strategic, according to Sedler.

 

"Long term, our efforts are intended to help spur the renewable energy market, making them cheaper and more available," he said. "This will, in turn, result in lowering the cost of power and reducing the overall carbon emissions from electric generation."

 

Intel's use of green power has increased significantly since 2008 when it purchased 1.3 billion kWh of green energy. By 2010, 50 percent of the company's U.S. power was from green sources and that jumped to nearly 88 percent in 2011. By contrast, Microsoft, which made its first appearance on the EPA list in 2012 and recently pledged to go carbon neutral in 2013, draws 46 percent of its electricity from green sources.

 

Though solar power fulfills a modest percentage of Intel's total electricity needs, solar installations provide tangible evidence of the company's commitment to renewable energy, according to Sedler.

 

"Solar is something you can see, touch and feel," he said. "With energy, we're not trying to find one single approach to sustainability. We take a portfolio approach and solar is part of that. But there's also conservation and efficiency efforts at all sites worldwide, LEED buildings, investments in green tech and making our products more energy efficient. As times change, we'll make changes to our portfolio, continuing to optimize the opportunities. Diversifying our energy supplies across the world will continue to be a priority for Intel."

 

 

Related Stories

 

A Young, Internet-Savvy Population and Government Investment In Educational Technology Boost Economic Prospects

 

2264352319_be3beb393a_b.jpgTwo young Muslim women in Istanbul compute while having lunch. Photo courtesy of Chris Schuepp. (Flickr photo)

Turkey was Europe's fastest-growing economy last year, expanding by more than 8 percent for the second consecutive year. Although that brisk pace is projected to slow to about 3 percent this year, the government has ambitions to become one of the world's top 10 economies by 2023, when the Republic of Turkey will celebrate its centennial. To get there, the government is betting big on technology to educate the country's youth. Today, 65 percent of the population is younger than 24, and the nation's leaders see this as a competitive advantage that will drive Turkey's growth.

 

"Turkey is a very young society, where adoption of new things can be quicker than other societies," said Anastasia Ashman, an author and Berkeley, Calif.-native who moved to Istanbul in 2003 and recently returned to the U.S. "One early adopter can get a group or whole family into a new thing almost overnight," she said, adding that this behavior is driving quick adoption of computers, smartphones and Facebook.

 

The average Internet user in Turkey spends more than 32 hours online each month. The nation ranks as the third-most Internet-engaged country in Europe, according to ComScore, after the U.K. and the Netherlands. "In fact, Turkey has the 12th-highest Internet usage in the world, with more than 27 million users," said Burak Aydin, Intel manager for Turkey.

 

Those users have a voracious appetite for Internet content, consuming an average of 3,706 Web pages per month, more than any other country in Europe, according to 2011 ComScore data. Many of those pages are on Facebook, which accounts for more than 28 percent of all the time Turkey's populace spends online -- more than three times the time spent on either Google or Microsoft sites.
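Those percentages translate into a striking amount of Facebook time. A quick calculation from the figures above (ComScore's numbers, our arithmetic):

```python
# How much of Turkey's monthly online time goes to Facebook,
# using the figures quoted above. The derivation is ours.
hours_online_per_month = 32.0   # "more than 32 hours," per ComScore
facebook_share = 0.28           # "more than 28 percent" of time online

facebook_hours = hours_online_per_month * facebook_share
print(f"~{facebook_hours:.0f} hours per month on Facebook alone")  # ~9 hours
```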

 

7119315595_bed200a162_b.jpgWith more people able to afford a PC, sales grew from 1 million units in 2003 to 5.5 million units in 2011 and are projected to reach 13.8 million units in 2015. (Flickr photo)

People in Turkey "picked up Facebook to connect with others, to be part of the crowd and not miss out or get left behind," Ashman said, adding that it was where they saw news and photos of what their friends and family did the night before. Facebook, she added, was the reason some people got their first PC. "Grandparents had to get on to Facebook because they value very deeply what's happening, and like staying up on things," she said. "Facebook keeps their finger on the pulse."

 

Although Facebook dominates, it's not the only popular social platform in Turkey, which ComScore ranks as the world's fourth-most socially engaged nation. There are more than 7 million Twitter users in Turkey, including President Abdullah Gül, who has nearly 2 million followers.

 

Facebook may be driving first-time PC purchases, but today the typical family in Turkey has multiple computers, according to Stefania Lorenz, director of research at IDC. "The main use is for education, gaming, entertainment and social, with more tech-savvy people moving to smartphones," she said.

 

"Awareness of new devices is very high and being connected is highly valued," said Ashman. She says that in the past few years, people have become more tolerant of others checking their smartphone or grabbing their laptop while in family or a social setting. "People used to complain, but not anymore," Ashman said. "They see the value in devices and are eager to get tips from people on how to get more out of their technologies."

 

Lorenz said that more than 70 percent of the PCs sold in Turkey are notebook computers. "Netbooks are still in place, but dropping from the amount sold in 2010, and tablets are not yet that popular," she said.

 

Turkey's growing middle class is finding technology more affordable and more essential in daily life. According to data from Intel, people living in Eastern Europe paid the equivalent of nearly 48 weeks of work for a new PC in 1995. By 2010, that had dropped to 5 weeks, and a new PC in 2014 is expected to cost the equivalent of just over 2 weeks of work. With more people able to afford a PC in Turkey, sales have grown from 1 million units in 2003 to 5.5 million units in 2011, and Aydin expects 13.8 million units to sell in 2015.
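Those affordability figures imply a steep compound decline in the work-time price of a PC. A quick sketch of the implied annual rates (inputs from the article, derivation ours):

```python
# Compound annual decline in the work-time price of a PC in Eastern
# Europe, implied by the figures quoted above. The derivation is ours.
weeks_1995, weeks_2010, weeks_2014 = 48, 5, 2

rate_1995_2010 = (weeks_2010 / weeks_1995) ** (1 / 15) - 1  # ~ -14% a year
rate_2010_2014 = (weeks_2014 / weeks_2010) ** (1 / 4) - 1   # ~ -21% a year

print(f"1995-2010: {rate_1995_2010:.1%} per year")
print(f"2010-2014: {rate_2010_2014:.1%} per year")
```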

 

6973235876_c1a1c027db_o.jpgA crowd gathers at 5:00 a.m. for the opening of the first Media Markt electronics store in Istanbul, Turkey. (Flickr photo)

Familiar multinational brands such as HP, Asus, LG and Lenovo are popular in Turkey, but people also buy from local PC makers such as Casper and Exper. Aydin said that most people purchase PCs directly from small channel stores, but that more are turning to relatively new big-box electronics retailers such as Teknosa and Electro World.

 

The government's big bet on educational technology for Turkey's 18 million students gets underway this year. The so-called "Fatih" project is a 4-year program with a goal of training tens of thousands of teachers in schools across the country, equipping educators with laptops, and providing tablet computers for students plus cloud computing infrastructure and interactive whiteboards for classrooms. The education initiative could bring 15 million tablet computers to the country, according to one forecast from Invest in Turkey.

 

Aydin believes Turkey's people -- the world's 17th largest population -- and its location at the crossroads between Europe and Asia will help turn it into a top 10 global economy.

 

"Our young population plus a growing middle class are keeping us in a favorable position, and our government policies are helping," he said. "But we will have to diversify our economy and diverge from depending too much on Europe."

 

 

 

Related stories

 
