Monday, June 22, 2009

All for Good: Bringing search, scale and openness to community service

While many organizations are doing great work to enable community service locally, it's not simple to search across opportunities from a variety of places to find what's right for you. We have some experience finding relevant information from among many scattered sources, and when we learned that President Obama and the First Lady were making community service a top priority even before taking office, we thought we could help make a difference.

With our mission in mind, a group of "20%" engineers, designers, and program managers from Google and other tech companies began work on All for Good, a new service to help you find volunteer events in your community, and share those events with your friends.

All for Good provides a single search interface for volunteer activities across many major volunteering sites and organizations like United Way, VolunteerMatch, HandsOn Network and Reach Out and Read. By building on top of the amazing efforts of existing volunteer organizations like these, we hope to amplify their efforts.


And in the spirit of open data, All for Good has a data API that anyone can use to search the same data displayed on the All for Good site. All for Good was developed entirely using App Engine and Google Base, with the full code repository hosted on Google Code Hosting. We'll be inviting developers to contribute to the open source application soon, so stay tuned.
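To give a feel for how a developer might call such a search API, here's a minimal sketch in Python. The endpoint path and parameter names below are illustrative assumptions, not the documented interface; check the All for Good API documentation for the real details.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only;
# consult the All for Good API docs for the actual interface.
API_BASE = "http://www.allforgood.org/api/volopps"

def build_search_url(keyword, location, num=10):
    """Build a URL that searches volunteer opportunities by
    keyword and location, asking for JSON output."""
    params = {"q": keyword, "vol_loc": location,
              "num": num, "output": "json"}
    return API_BASE + "?" + urlencode(params)

url = build_search_url("tutoring", "San Francisco, CA")
# The resulting URL could then be fetched with any HTTP client
# and the JSON response parsed into a list of opportunities.
```

The point of a single search interface like this is that one query string works across every participating organization's listings, rather than requiring a separate integration per source.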

Just as releasing the Maps API led to a surge of independent and creative uses of geographic information, we've built All for Good as a platform to encourage innovation in volunteerism, as much as an end product in itself. We hope software developers will use the API or code to build their own volunteering applications, some even better than the All for Good site!

And if you want to volunteer your video-creating skills to make a difference, check out YouTube Video Volunteers, a new platform that connects non-profits that need video with skilled video makers who can help broadcast their causes.

All for Good is a new kind of collaboration between the private, public and nonprofit sectors to build free and open technology that empowers citizens. Similar to the OpenSocial Foundation, we helped create a new organization called Our Good Works to make sure that the API, the platform, and the social innovation they inspire are supported for the long term. The leadership includes Reid Hoffman, Chris DiBona, Arianna Huffington and Craig Newmark on the board, and the organization aims to support volunteerism services like All for Good.

Today the First Lady is in San Francisco calling on Americans to improve our communities by rolling up our sleeves and putting our time and talent towards doing good. You can learn more at serve.gov, where we're proud to power search.

A new landmark in computer vision

Science fiction books and movies have long imagined that computers will someday be able to see and interpret the world. At Google, we think computer vision has tremendous potential benefits for consumers, which is why we're dedicated to research in this area. And today, a Google team is presenting a paper on landmark recognition (think: Statue of Liberty, Eiffel Tower) at the Computer Vision and Pattern Recognition (CVPR) conference in Miami, Florida. In the paper, we present a new technology that enables computers to quickly and efficiently identify images of more than 50,000 landmarks from all over the world with 80% accuracy.

To be clear up front, this is a research paper, not a new Google product, but we still think it's cool. For our demonstration, we begin with an unnamed, untagged picture of a landmark, enter its web address into the recognition engine, and poof — the computer identifies and names it: "Recognized Landmark: Acropolis, Athens, Greece." Thanks, computer.

How did we do it? It wasn't easy. For starters, where do you find a good list of thousands of landmarks? Even if you have that list, where do you get the pictures to develop visual representations of the locations? And how do you pull that source material together in a coherent model that actually works, is fast, and can process an enormous corpus of data? Think about all the different photographs of the Golden Gate Bridge you've seen — the different perspectives, lighting conditions and image qualities. Recognizing a landmark can be difficult for a human, let alone a computer.

Our research builds on the vast number of images on the web, the ability to search those images, and advances in object recognition and clustering techniques. First, we generated a list of landmarks relying on two sources: 40 million GPS-tagged photos (from Picasa and Panoramio) and online tour guide webpages. Next, we found candidate images for each landmark using these sources and Google Image Search, which we then "pruned" using efficient image matching and unsupervised clustering techniques. Finally, we developed a highly efficient indexing system for fast image recognition. The following image provides a visual representation of the resulting clustered recognition model:
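The "pruning" step above can be pictured as building a match graph over candidate images and keeping its connected components as clusters of views. The sketch below is a toy illustration of that idea, assuming a simple cosine-similarity stand-in for the local-feature matching the paper actually uses; the function names and threshold are our own inventions.

```python
from itertools import combinations

def similarity(a, b):
    # Toy stand-in for image matching: cosine similarity of
    # feature vectors. The real system matches local image
    # descriptors, which is far more robust.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def cluster_views(images, threshold=0.9):
    """Group candidate images into clusters of matching views:
    connect any pair whose similarity clears the threshold,
    then take connected components via union-find."""
    parent = list(range(len(images)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in combinations(range(len(images)), 2):
        if similarity(images[i], images[j]) >= threshold:
            union(i, j)

    clusters = {}
    for i in range(len(images)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Two near-duplicate "views" and one unrelated image: the first
# two end up in one cluster, the third in its own.
views = cluster_views([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
```

Clustering matched views this way means the recognition index can store one representative model per cluster instead of every raw photo, which is what makes matching against 50,000 landmarks fast.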


In the above image, related views of the Acropolis are "clustered" together, allowing for a more efficient image matching system.

While we've gone a long way towards unlocking the information stored in text on the web, there's still much work to be done unlocking the information stored in pixels. This research demonstrates the feasibility of efficient computer vision techniques based on large, noisy datasets. We expect the insights we've gained will lay a useful foundation for future research in computer vision.

If you're interested in learning more about this research, check out the paper.
