Thursday, March 17, 2011

Tsunami warnings now faster, more accurate

As the deadly tsunami generated by Friday's massive earthquake off the coast of Japan headed toward the United States, scientists at the National Oceanic and Atmospheric Administration's (NOAA) Center for Tsunami Research tracked its progress in real time.
Dozens of deep-ocean tsunami-monitoring sensors more than three miles beneath the surface of the Pacific Ocean picked up information on the silent swell of water and transmitted it via satellite to the Pacific Marine Environmental Laboratory in Seattle, Wash.

NOAA energy map shows the intensity of the tsunami caused by Japan's magnitude 8.9 earthquake. Darker red colors are more intense. (Image: Ho New/Reuters)
There, scientists crunched the data and quickly developed real-time predictions about how and when the tsunami would reach select locations in Hawaii, Alaska and the U.S. West Coast. The models predicted the wave arrival time, estimated wave height and the likely extent of inundation for about 50 communities likely to be affected.
When the data indicates danger, first responders in those communities gain valuable lead time to put evacuation plans into motion and limit loss of life.
That kind of real-time, precision forecasting is a far cry from what was available in 2004 during the massive tsunami in the Indian Ocean, said Diego Arcas, a scientist with the NOAA Center for Tsunami Research (NCTR). That tsunami nearly obliterated the Indonesian coastline and that of other countries, killing hundreds of thousands without warning.
"It's almost a whole new world since 2004" in the field of tsunami forecasting, Arcas said.
Hundreds of people were killed and whole cities devastated in Japan by one of the worst earthquakes in over 100 years. The magnitude-8.9 quake generated a huge tsunami that inundated parts of Japan and put almost the entire Pacific coastline on high tsunami alert.
The effects of the quake, in terms of both human loss and economic damage, are expected to be enormous.
The NCTR supports NOAA's Tsunami Warning Centers (TWCs); its mission is to develop numerical models that help the TWCs produce faster and more reliable real-time tsunami forecasts. The technology the NCTR uses today is still being tested by the TWCs for issuing tsunami warnings.
But it already represents the next step in tsunami modeling, Arcas said. Six years ago, there were just eight deep-sea sensors in the Pacific Ocean to monitor for tsunamis. Today, there are about 30 of NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys collecting such data and beaming it to the TWCs around the country.
There are also about 20 DART systems in the Atlantic and about half a dozen in the Indian Ocean.
When a tsunami travels across the ocean and passes over a DART system, the sensor measures the change in sea levels and reports it back to the TWCs. With first-generation DART sensors, alerts were triggered only when sea-level measurements exceeded specific thresholds.
Current DART systems feature two-way communications that allow forecasters to get measurement data on demand. The sensors are also so sensitive that they can detect an ocean level rise of less than one centimeter, Arcas said. "So we have the ocean instrumented much better than it was five years ago," he said.
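The threshold logic described above can be sketched in a few lines. This is an illustrative toy, not NOAA's actual firmware; the 3-centimeter threshold and the data format are assumptions chosen for the example:

```python
# Illustrative sketch of threshold-based sea-level anomaly detection,
# in the spirit of first-generation DART sensors. The 0.03 m threshold
# and list-based data format are assumptions, not NOAA's real values.

def detect_anomalies(measured_m, predicted_tide_m, threshold_m=0.03):
    """Flag samples where the measured sea level deviates from the
    predicted tide by more than the threshold."""
    alerts = []
    for i, (measured, predicted) in enumerate(zip(measured_m, predicted_tide_m)):
        deviation = abs(measured - predicted)
        if deviation > threshold_m:
            alerts.append((i, deviation))
    return alerts

# Example: a 5 cm anomaly at sample 2 trips the detector.
tide = [2.00, 2.01, 2.02, 2.03]
obs  = [2.00, 2.01, 2.07, 2.03]
print(detect_anomalies(obs, tide))  # one alert, at index 2
```

The two-way communication in current DART systems removes the need to rely on a fixed trigger like this: forecasters can simply poll the sensor for measurements on demand.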
The data gathered from these deep sensors gives tsunami modelers more information to work with than tidal gauges alone. By combining the improved measurements with historical data, bathymetry (ocean depth) and coastal topography, scientists can predict tsunamis far more accurately, he said.
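One reason bathymetry matters so much: in the open ocean, a tsunami behaves as a long "shallow-water" wave whose speed depends almost entirely on depth, roughly c = √(g·h). The back-of-the-envelope estimate below uses that textbook approximation with an assumed constant depth; real forecast models solve the full shallow-water equations over gridded bathymetry:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Long-wave (shallow-water) phase speed: c = sqrt(g * h)."""
    return math.sqrt(G * depth_m)

def travel_time_hours(distance_km, depth_m):
    """Crude arrival-time estimate assuming constant ocean depth."""
    speed_kmh = tsunami_speed(depth_m) * 3.6  # m/s -> km/h
    return distance_km / speed_kmh

# Over a 4,000 m deep ocean a tsunami moves at roughly jet-airliner
# speed, so a wave 6,000 km away arrives in well under a day.
print(round(tsunami_speed(4000) * 3.6))         # ~713 km/h
print(round(travel_time_hours(6000, 4000), 1))  # ~8.4 hours
```

This is also why arrival-time forecasts can be made hours in advance for distant coastlines: the wave's path and speed across the deep ocean are well constrained once its source is measured.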
In fact, given the right set of data, scientists at NOAA today can develop simulations of up to four hours of tsunami activity in about 10 minutes, Arcas said. "Usually, the largest waves happen within the first four hours of a tsunami," he said. For first responders and emergency managers, "that is the most important information they want to get out of a warning."

New Facebook vulnerability patched

Facebook has quietly fixed a vulnerability discovered recently by two student researchers that allowed malicious websites to access a Facebook user's private data without permission and post malicious links onto their profile.
Students Rui Wang and Zhou Li contacted security firm Sophos and told them the flaw they found made it possible for any website to impersonate other sites that had been authorized to access user data such as name, gender and date of birth. In other words, if a user had granted any site -- YouTube, say, or a gaming or news site -- access to their Facebook profile, a malicious site could potentially access that sensitive data. The researchers also found it was possible for the malicious site to pose as a legitimate website and publish content on the visiting user's Facebook wall -- a common way malware is spread on the social network.
Users were at risk if they visited a malicious website while logged into Facebook. The flaw was the result of a problem within one of Facebook's authentication mechanisms, which the students explain in a YouTube video.
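The article doesn't describe Facebook's internal fix, but the general defense against this class of impersonation bug is strict origin allowlisting: a data request is honored only if it comes from an origin registered for that application, never on the basis of a caller-supplied identifier alone. A minimal sketch (all names and identifiers here are hypothetical):

```python
# Generic sketch of origin allowlisting, the standard defense against
# one site impersonating another authorized application. The app id,
# origins and function names are hypothetical, not Facebook's API.

AUTHORIZED_ORIGINS = {
    "app-1234": {"https://www.youtube.com"},  # app id -> registered origins
}

def is_request_authorized(app_id, request_origin):
    """Honor a request only if its origin was registered for the app.
    A caller claiming an app id from an unregistered origin is refused."""
    return request_origin in AUTHORIZED_ORIGINS.get(app_id, set())

print(is_request_authorized("app-1234", "https://www.youtube.com"))  # True
print(is_request_authorized("app-1234", "https://evil.example"))     # False
```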
Because the students practiced responsible disclosure and informed Facebook's security team about the flaw, Facebook Security was able to fix the vulnerability quickly, according to Sophos' Graham Cluley.
"Clearly Facebook's website is a complex piece of software, and it is almost inevitable that vulnerabilities and bugs will be found from time to time," said Cluley. "The risk is compounded by the fact that there's so much sensitive personal info about users being held by the site -- potentially putting many people at risk."
Facebook has fixed many researcher-reported bugs in recent years. Earlier this year it patched a flaw that allowed private chats to be made public. Last week, Facebook announced new security enhancements on the site.

Google Docs Adds Discussions

Google Docs is designed for writing. But often there's a need to write about what's been written. That's why, on Wednesday, Google is planning to add discussions to Google Docs.
Discussions aren't the same as comments. They're not discrete notes placed in the text. Rather, they're intended to be yet another way to help people work together, to hash out ideas. A Docs discussion descends downward in its own window pane like a miniature forum. It's a well-designed and eminently useful feature.
Scott Johnston, a group product manager for Google, describes discussions as a way to accelerate collaboration.
Discussions integrate with e-mail and they're designed with business processes in mind -- there's a Resolve button to underscore how discussions can be used for document development and approval.
This is why people love cloud computing: New features just appear, without the need to download or install any additional software.
Google developed over 130 features last year for Google Apps, its online application suite. And in order to iterate that rapidly, Google depends heavily on people like Johnston.
Johnston came to Google in late 2006 through its acquisition of JotSpot, which became Google Sites. He oversaw the creation of discussions in Docs.
"I'm sort of a startup nut," he said. "I'm addicted to the energy. ...I don't do well at large companies." He says he expected to help re-release JotSpot on Google's infrastructure and to then depart for another startup. But he ended up being convinced to stay by Google managers Jonathan Rochelle and Bradley Horowitz.
Rochelle in particular, during the six years he's been at Google and working on Docs, has tried to maintain teams that operate like startups inside Google. Johnston says that Google itself is run like a federation of startups, so it's not as if the Docs group is alone in its attempt to maintain agility amid corporate growth.
About a year ago, when he was on the verge of leaving Google, Johnston says a poorly received pitch meeting changed his view of what a large company could be. Trying to address the concerns raised about his proposed project, he contacted a Google engineer in Sydney, Australia, and subsequently traveled there for a face-to-face meeting.
The meeting went well and led Johnston to believe that he could recapture the excitement of being an entrepreneur without leaving Google. Now, he says, he's running some 20 to 25 projects with various Google engineers as if they're startups.
"I feel almost at this point like I'm an angel investor," he said. "It's like this ideal world where I have these amazing resources. I have funding, if I can convince my boards to fund me."
For Google, maintaining a vital culture of entrepreneurship is believed to be necessary to retain talent, particularly with rivals like Facebook seeking to lure high-value employees away. "We're constantly trying to figure out how do you keep entrepreneurship going while still being the size we are," said Johnston. "And I think it's starting to click."
Perhaps not coincidentally, Google recently raised base salaries for its employees by 10% and, for non-executive employees, shifted from offering bonuses to salary increases.
Johnston says he's seen Google find itself in the past few years. "When I started, I don't think we were that sure of ourselves," he said. "All of a sudden there was all this attention on us and I don't think we were ready for that. I've seen the company mature a lot. And that's another reason that I stayed. The company has moved to a place where it seems to care as much about my health and well-being as it does about my output."