Automating The Google Knowledge Graph With Google's Knowledge Vault

Sep 3, 2014 - 8:29 am
Filed Under Google


The New Scientist reports that Google is building a version of the knowledge graph that expands its knowledge through algorithms at mass scale - Google calls it the Knowledge Vault.

Google is building the largest store of knowledge in human history – and it's doing so without any human help.

Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

I honestly thought the knowledge graph wasn't built by hand either. Dumb me. Okay, I am not that dumb. The knowledge graph was by no means built entirely by hand. I am confident Google didn't hire armies of people to copy and paste content into a database for them.

The Knowledge Vault, in my opinion, is just better at the automated part. As Google continued to revamp and improve the knowledge graph, it became better at picking off content from your web site and storing it in a more structured fashion, which Google can then use as answers without credit.
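For illustration only, here is a minimal sketch of what "storing content in a more structured fashion" could look like: facts extracted from many pages represented as (subject, predicate, object) triples with a confidence score, merged into one store. Every name and number here is a hypothetical assumption on my part, not anything Google has published about its actual implementation.

```python
# Hypothetical sketch of merging extracted facts into one structured
# store, the general shape of a system like Knowledge Vault. All
# names and values here are illustrative, not Google's real code.
from collections import defaultdict

def merge_facts(extractions):
    """Merge (subject, predicate, object, confidence) tuples drawn
    from many sources, keeping the highest confidence per fact."""
    store = defaultdict(float)  # unseen facts default to 0.0
    for subj, pred, obj, conf in extractions:
        key = (subj, pred, obj)
        store[key] = max(store[key], conf)
    return store

# The same fact extracted twice (from a strong and a weak source)
# collapses into one entry with the higher confidence.
extractions = [
    ("Eiffel Tower", "located_in", "Paris", 0.9),
    ("Eiffel Tower", "located_in", "Paris", 0.6),
    ("Eiffel Tower", "height_m", "300", 0.4),
]
merged = merge_facts(extractions)
```

The point of the sketch is just that once facts are triples rather than sentences on your page, Google can serve them back as direct answers with no link back to the source.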

A statement like this from the article makes me go wow:

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.

Really? That cannot be accurate.

So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

I find this hard to believe.

Google used algorithms to pick off data from sources such as "Wikipedia, subject-specific resources like Weather Underground, publicly available data from Freebase.com, and Google search data." In fact, on that page, Google says it gets data for the knowledge graph in an "automated" fashion, so there can be problems, and it asks that they be reported.

The information in these sections is compiled by automated systems, so there's always a chance that some of the information is incorrect or no longer relevant.

I assume the Knowledge Vault is simply better at crawling, indexing and borrowing content from more sources, in a more automated fashion, than the Knowledge Graph.

So are you concerned now? When does this become more than a Swiss Army knife and leave you out of the equation?

Forum discussion at WebmasterWorld.

Image credit to BigStockPhoto for vault
