The semantic web is an exciting new evolution of the World Wide Web (WWW) providing machine-readable and machine-comprehensible information far beyond current capabilities. In an age of information deluge, governments, individuals and businesses will come to rely more and more on automated services, which will improve in their capacity to assist humans by “understanding” more of the content on the web. This has potentially far-reaching consequences for all businesses today.

More information on the web needs to be structured in a form that machines can ‘understand’ and process rather than merely display. This does not demand human-like intelligence from machines: it relies on a machine’s ability to solve well-defined problems by performing well-defined operations on well-defined data. Sir Tim Berners-Lee, inventor of the World Wide Web, coined the term “Semantic Web” to describe this approach. Berners-Lee, Hendler and Lassila provide the following definition:

The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.

—Tim Berners-Lee, James Hendler and Ora Lassila, Scientific American, May 2001

What Is The Semantic Web?

The Semantic Web is a mesh of information linked up in such a way as to be easily processable by machines, on a global scale. You can think of it as being an efficient way of representing data on the World Wide Web, or as a globally linked database.

The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed. However, Semantic Web technologies are still very much in their infancy, and although the future of the project in general appears to be bright, there seems to be little consensus about the likely direction and characteristics of the early Semantic Web.

What's the rationale for such a system? Data that is generally hidden away in HTML files is often useful in some contexts, but not in others. The problem with the majority of data on the Web in this form at the moment is that it is difficult to use on a large scale, because there is no global system for publishing data in such a way that it can be easily processed by anyone. For example, think of information about local sports events, weather, plane times, Major League Baseball statistics, and television guides... all of this information is presented by numerous sites, but all in HTML. The problem is that, in some contexts, it is difficult to use this data in the ways that one might want to.

So the Semantic Web can be seen as a huge engineering solution... but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, so more people will want to publish data, and there will be a knock-on or domino effect. We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning... onto how this will be accomplished.

The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triples-based structures: many triples of URI data can be held in databases, or interchanged on the World Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called "Resource Description Framework" syntaxes.

URI - Uniform Resource Identifier

A URI is simply a Web identifier: like the strings starting with "http:" or "ftp:" that you often find on the World Wide Web. Anyone can create a URI, and the ownership of them is clearly delegated, so they form an ideal base technology on which to build a global Web. In fact, the World Wide Web is such a thing: anything that has a URI is considered to be "on the Web".

The syntax of URIs is carefully governed by the IETF, who published RFC 2396 as the general URI specification. The W3C maintains a list of URI schemes.

RDF - Resource Description Framework

A triple can simply be described as a statement of three parts, a subject, a predicate and an object, each typically identified by a URI (the object may also be a literal value). A language which utilises triples in such a way is called RDF: the W3C has developed an XML serialization of RDF, the "Syntax" in the RDF Model and Syntax recommendation. XML RDF is considered to be the standard interchange format for RDF on the Semantic Web, although it is not the only format. For example, Notation3 (which we shall be going through later on in this article) is an excellent plain text alternative serialization.

Once information is in RDF form, it becomes easy to process, since RDF is a generic format for which many parsers already exist. XML RDF is quite a verbose syntax, and it can take some getting used to (for example, to learn XML RDF properly, you need to understand a little about XML and namespaces beforehand...), but let's take a quick look at an example of XML RDF right now:-
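(The resource and title in the following fragment are chosen purely for illustration.)

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://www.w3.org/">
    <dc:title>World Wide Web Consortium</dc:title>
  </rdf:Description>
</rdf:RDF>

This fragment encodes a single triple: the resource http://www.w3.org/ has a dc:title of "World Wide Web Consortium". The rdf:RDF and rdf:Description elements and the namespace declarations are the syntactic scaffolding; the triple itself is the semantics.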

Why RDF?

When people are confronted with XML RDF for the first time, they usually have two questions: "why use RDF rather than XML?", and "do we use XML Schema in conjunction with RDF?".

The answer to "why use RDF rather than XML?" is quite simple, and is twofold. Firstly, the benefit that one gets from drafting a language in RDF is that the information maps directly and unambiguously to a model, a model which is decentralized, and for which there are many generic parsers already available. This means that when you have an RDF application, you know which bits of data are the semantics of the application, and which bits are just syntactic fluff. And not only do you know that, everyone knows that, often implicitly without even reading a specification because RDF is so well known. The second part of the twofold answer is that we hope that RDF data will become a part of the Semantic Web, so the benefits of drafting your data in RDF now draws parallels with drafting your information in HTML in the early days of the Web.

The answer to "do we use XML Schema in conjunction with RDF?" is almost as brief. XML Schema is a language for restricting the syntax of XML applications. RDF already has a built in BNF that sets out how the language is to be used, so on the face of it the answer is a solid "no". However, using XML Schema in conjunction with RDF may be useful for creating datatypes and so on. Therefore the answer is "possibly", with a caveat that it is not really used to control the syntax of RDF. This is a common misunderstanding, perpetuated for too long now.

Screen Scraping, and Forms

For the Semantic Web to reach its full potential, many people need to start publishing data as RDF. Where is this information going to come from? A lot of it can be derived from the many data publications that exist today, using a process called "screen scraping". Screen scraping is the act of getting data from a source into a more manageable form (i.e., RDF) using whatever means come to hand. Two useful tools for screen scraping are XSLT (an XML transformations language) and regular expressions (in Perl, Python, and so on).
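As a rough sketch of the regular-expression approach (the URL, page structure, and use of dc:title here are invented for illustration), a few lines of Python can pull a page title out of HTML and re-emit it as RDF in Notation3:

import re
import urllib.request

# Fetch a page and pull out its <title> with a regular expression.
url = "http://example.org/"
html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)

if match:
    title = match.group(1).strip().replace('"', '\\"')
    # Emit the scraped fact as a single triple.
    print('@prefix dc: <http://purl.org/dc/elements/1.1/> .')
    print('<%s> dc:title "%s" .' % (url, title))

Real scrapers have to cope with far messier markup, but the principle is the same: turn whatever structure the source happens to have into triples.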

However, screen scraping is often a tedious solution, so another way to approach it is to build proper RDF systems that take input from the user and then store it straight away in RDF. Data such as you may enter when signing up for a new mail account, buying some CDs online, or searching for a used car can all be stored as RDF and then used on the Semantic Web.

The Power Of Semantic Web Languages

The main power of Semantic Web languages is that anyone can create one, simply by publishing some RDF that describes a set of URIs, what they do, and how they should be used. We have already seen that RDF Schema and DAML are very powerful languages for creating languages.
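As a small illustration (the pet: namespace and its terms are invented for this example), a new "language" can be nothing more than a handful of RDF Schema statements published at some URI:

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix pet:  <http://example.org/pets#> .

pet:Dog a rdfs:Class ;
    rdfs:comment "A class for dogs, defined simply by publishing this RDF." .
pet:name a rdf:Property ;
    rdfs:domain pet:Dog ;
    rdfs:comment "The name that a particular dog answers to." .

Anyone with a generic RDF processor can then use pet:Dog and pet:name in their own data.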

Because we use URIs for each of the terms in our languages, we can publish the languages easily without fear that they might get misinterpreted or stolen, and with the knowledge that anyone in the world who has a generic RDF processor can use them.

The Principle Of Least Power

The Semantic Web works on a principle of least power: the fewer rules, the better. This means that the Semantic Web is essentially very unconstraining in what it lets one say, and hence it follows that anyone can say anything about anything. When you look at what the Semantic Web is trying to do, it becomes very obvious why this level of power is necessary... if we started constraining people, they wouldn't be able to build a full range of applications, and the Semantic Web would therefore become useless to some people.

How Much Is Too Much?

However, it has been pointed out that this power will surely be too much... won't people be trying to process their shopping lists on an inference engine, and suddenly come up with a plan for world peace, or some strange and exciting new symphony?

The answer is (perhaps unfortunately!) no. Although the basic parts of the Semantic Web (RDF and the concepts behind it) are very minimally constraining, applications that are built on top of the Semantic Web will be designed to perform specific tasks, and as such will be very well defined.

For example, take a simple server log program. One might want to record some server logs in RDF, and then build a program that can gather statistics from the logs that pertain to the site: how many visitors it had in a week, and so forth (a sketch of what such a log record might look like follows below). That doesn't mean that it'll turn your floppy disc drive into a toaster or anything; it'll just process server logs. The power that you get from publishing your information in RDF is that, once published in the public domain, it can be repurposed (used for other things) much more easily. Because RDF uses URIs, it is fully decentralized: you don't have to beg some central authority to publish a language and all your data for you... you can do it yourself. It's Do It Yourself data management.
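As a hedged sketch of what one such record might look like (the hit: vocabulary here is invented purely for the example), each request could simply be a handful of triples in Notation3:

@prefix hit: <http://example.org/serverlog#> .

[] a hit:Request ;
   hit:path "/index.html" ;
   hit:date "2001-09-08" ;
   hit:userAgent "Mozilla/4.75" .

A statistics program can then count matching triples rather than re-parsing a bespoke log format.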

The Pedantic Web

Unfortunately, there is an air of academia and corporate thinking lingering in the Semantic Web community, which has led to the term "Pedantic Web" being coined, and a lot of mis/disinformation and unnecessary hype being disseminated. Note that this very document was devised to help clear up some common misconceptions that people may have about the Semantic Web.

For example, almost all beginners to RDF go through a sort of "identity crisis" phase, where they confuse people with their names, and documents with their titles. It is common to see statements such as:-

<http://example.org/> dc:creator "Bob" .

However, Bob is just a literal string, so how can a literal string write a document? What the author really means is:-

<http://example.org/> dc:creator _:b .
_:b foaf:name "Bob" .

i.e., that example.org was created by someone whose name is "Bob". Tips like these are being slowly collected, and some of them are being displayed in the SWTips guide, a collection of Semantic Web hints and tips maintained as a collaborative development project.

Education And Outreach

The move away from the "Pedantic Web", to some extent, is all part of a movement to bring the power of the Semantic Web to the people. This is a well-documented need:-

[...] the idea that the above URIs reveal a schema that somehow fully describes this language and that it is so simple (only two {count 'em 2} possible "statements"), yet looks like the recipe for flying to Mars is a bit daunting. Its very simplicity enables it to evaluate and report on just about anything - from document through language via guidelines! It is a fundamental tool for the Semantic Web in that it gives "power to the people" who can say anything about anything.

- EARL for dummies, William Loughborough, May 2001

RDF Schema and DAML+OIL are languages that generally need to be learned, however, so what is being done to accommodate people who have neither the time nor the patience to read up on these things, and yet want to create Semantic Web applications? Thankfully, many Semantic Web applications will be lower-end applications, so you'll no more need a knowledge of RDF than Amaya requires you to have a knowledge of (X)HTML.

Trust and Proof

The next step in the architecture of the Semantic Web is trust and proof. Very little is written about this layer, which is a shame since it will become very important in the future.

In stark reality, the simplest way to put it is: if one person says that x is blue, and another says that x is not blue, doesn't the whole Semantic Web fall apart?

The answer is of course not, because a) applications on the Semantic Web at the moment generally depend upon context, and b) applications in the future will generally contain proof checking mechanisms and digital signatures.

Context

Applications on the Semantic Web will generally depend on context to let people know whether or not they trust the data. If I get an RDF feed from a friend about some movies that he's seen, and how highly he rates them, I know that I trust that information. Moreover, I can use that information and safely trust that it came from him, and then leave it to my own judgement just how much I trust his critiques of the films that he has reviewed.

Groups of people also operate on shared context. If one group is developing a Semantic Web depiction service, cataloguing who people are, what their names are, and where pictures of those people are, then my trust of that group is dependent upon how much I trust the people running it not to make spurious claims.

So context is a good thing because it lets us operate on local and medium scales intuitively, without having to rely on complex authentication and checking systems. However, what happens when there is a party that we know, but we don't know how to verify that a certain heap of RDF data came from them? That's where digital signatures come in.


In general, there are small- and large-scale systems, and interactions between the two will most likely form a huge part of the transactions that occur on the Semantic Web. Let's define what we mean by large-, medium-, and small-scale systems.

Large Scale

An example of a large-scale system is two companies that are undergoing a merger needing to combine their databases. Another example would be search engines compiling results based upon a huge range of data. Large-scale Semantic Web systems generally involve large databases, and heavy-duty inference rules and processors are required to handle them.

Medium Scale

Medium-scale Semantic Web systems attempt to make sense out of the larger-scale Semantic Web systems, or are examples of small-scale Semantic Web systems joined together. An example of the former is a company trying to partially understand two large-scale invoice formats enough to use them together. An example of the latter is two groups with address book languages trying to create a super address book language.

Small Scale

Small-scale Semantic Web systems are less widely discussed. By small-scale Semantic Web systems, we mean languages that will be used primarily offline, or piles of data that will only be transferred with a limited scope, perhaps between friends, departments, or even two companies.

Sharing data on a local level is a very powerful example of how the Semantic Web can be useful in a myriad of situations. In the next section on evolution we shall find out how interactions between the different sized systems will form a key part of the Semantic Web.

SEM - SEmantic Memory

The concept of a SEmantic Memory was first proposed by Seth Russell, who suggested that personal database dumps of RDF that one has collected from the "rest" of the Semantic Web (a kind of Semantic Cloud) would be imperative for maintaining a coherent view of data. For example, a SEM would most likely be partitioned into data which is inherent to the whole Semantic Web (i.e., the schemata for the major languages such as XML RDF, RDF Schema, DAML+OIL, and so on), local data which is important for any Semantic Web applications that may be running (e.g. information about the logic namespace for CWM, which is currently built in), and data that the person has personally been using, is publishing, or that has been otherwise entered into the root context of the SEM.

The internal structure of a SEM will most likely go well beyond the usual triples structure of RDF, perhaps as far as quads or even pents. The extra fields are for contexts (an StID), and perhaps sequences. In other words, they are ways of grouping information within the SEM, for easy maintenance and update. For example, it should become simple to just delete any triple that was added into a certain context by removing all triples with that particular StID.
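To make that concrete, here is a minimal sketch of such a context-aware store in Python (the class and method names are invented for illustration, not taken from any existing toolkit):

class QuadStore:
    # A toy store of (subject, predicate, object, context) quads.
    def __init__(self):
        self.quads = []

    def add(self, s, p, o, context):
        # Every statement is tagged with the context (StID) it was added under.
        self.quads.append((s, p, o, context))

    def remove_context(self, context):
        # Deleting a whole batch of statements means dropping one context.
        self.quads = [q for q in self.quads if q[3] != context]

    def triples(self):
        # View the store as plain RDF triples, ignoring the context field.
        return [(s, p, o) for (s, p, o, _) in self.quads]

store = QuadStore()
store.add("http://example.org/", "dc:title", "Example", "feed-42")
store.remove_context("feed-42")   # wipes everything added under that StID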

A lot of work on the Semantic Web has concentrated on making data stores (i.e., SEMs) interoperable, which is good, but that has led to less work being conducted on what actually happens within the SEM itself, which is not good, because the representation of quads and pents in RDF is therefore still up in the air; this is an area for Semantic Web developers to be investigating at this stage.

Evolution

A very important concept on the Semantic Web is that of evolution: going from one system into another. Two key parts of evolvability are partial understanding and transformability. We will find out next how these manifest themselves naturally when changing the scale of a system.

Partial Understanding: Large Scale to Medium Scale

The concept of partial understanding is a very important one on the Semantic Web, and can often be found in older documents that came out about the same time as the Semantic Web was first being theorized.

An example of partial understanding when moving from a large-scale system to a medium-scale system is a company trying to make sense out of two invoices, one from Company A and one from Company B. It is well known that both of the companies use similar fields in their invoices, so a company trying to make sense out of the invoices can easily compile a master list of expenditures by simply scraping the data from the two invoice languages. Neither Company A nor Company B need to know that this is going on.

Indeed, TimBL included this example in his XML 2000 keynote:-

[...] what we'll end up doing in the future is converting things, so for example [...] in the Semantic Web we will have a relationship between two languages so that if you get an invoice in a language you don't understand, and you have... some business software which can pay invoices... by following links across the Semantic Web, your machine will be able to automatically convert it from one language to another, and so process it.

- Tim Berners-Lee

Transformability: Small Scale to Medium Scale

An example of small-scale Semantic Web systems joined together to make a medium-sized Semantic Web system could be two groups that have published address book formats wanting to make a larger and better address book format by merging the two current formats. Anyone using one of the old address book formats could probably convert their data into the new format, and hence there would be a greater sense of interoperability. That's generally what happens when one goes from a small-scale Semantic Web system to a medium-scale Semantic Web system, although this is often not without some disadvantages and incompatibilities. The Semantic Web takes the sting out of it by automating 99% of the process (it can convert field A into field B, but it can't fill in any new data for you... of course, new fields can always be left empty for a while).

Facilitating Evolvability

How do we document the evolution of languages? This is a very important and indeed urgent question, and one which TimBL summarized quite neatly:-

Where for example a library of congress schema talks of an "author", and a British Library talks of a "creator", a small bit of RDF would be able to say that for any person x and any resource y, if x is the (LoC) author of y, then x is the (BL) creator of y. This is the sort of rule which solves the evolvability problems. Where would a processor find it? [...]

- Semantic Web roadmap, Tim Berners-Lee
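Such a rule can be expressed in RDF itself. A minimal sketch in Notation3, with made-up loc: and bl: namespaces standing in for the real Library of Congress and British Library vocabularies:

@prefix loc: <http://example.org/loc-terms#> .
@prefix bl:  <http://example.org/bl-terms#> .

{ ?doc loc:author ?person } => { ?doc bl:creator ?person } .

A rules processor such as CWM can apply this to data drafted in one vocabulary and infer the corresponding statements in the other.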

One possible answer is: third-party databases. Very often, it is not practical to have (in TimBL's example) either the LoC or the BL record the fact that two of their fields are the same, so this information will have to be recorded by a reputable third party.

One such "third party" that was set up to investigate this is SWAG, the Semantic Web Agreement Group. Co-founded by Seth Russell, Sean B. Palmer, Aaron Swartz, and William Loughborough, the group aims to ensure interoperability on the Semantic Web. They set up what is possibly the first ever third-party Semantic Web dictionary, the WebNS SWAG Dictionary.

Intertwining: Difficult, But Important

Although the Semantic Web as a whole is still very much at a grassroots kind of level, people are starting to take notice; they're starting to publish information using RDF, and thereby making it fit for the Semantic Web.

However, not enough is being done to link information together... in other words, the "Semantic" part of the "Semantic Web" is coming along nicely, but where's the "Web"? People are not using other people's terms effectively; when they do use other terms, they often do so because they're aimlessly trying to help, but just generating noise in the process. If you're going to use other people's data, try to find out beforehand what the advantage is in doing that. For example, just because you use the term "dc:title" in your RDF rather than a home-brewed ":title", does that mean that suddenly a Dublin Core application is going to be able to "understand" your code? Of course not. What it does mean, however, is that if the "dc:title" property in your instance is being put to use in such a way that information may need to be repurposed from it in the near future, then you may gain some advantage: because "dc:title" is such a commonly used term, you may be able to modify a current rules file, or whatever.

Another part of the problem may be due to a problem similar to the one that the early World Wide Web experienced: why bother publishing a Web site when there is no one else's site to link to or be linked to? Why bother publishing a Web site when so few people have browsers? Why bother writing a browser when there are so few Web sites? Some people have to make the leaps for it all to happen, and that's a slow process.

What can be done about the situation? Well, it may hopefully sort itself out. Another well-known principle that applies very well to Semantic Web applications is that there is no point in reinventing the wheel; viz., if someone has already invented a schema which contains a comprehensive, well understood, and well used set of terms that you also need in your application, then there is no point in trying to redo the work that they have done. At some point this may lead to a form of "schema war", but survival of the fittest should see to it that a core of the best schemata are put to the most use. This is probably what TimBL means when he says that terms will just "emerge" out of the Semantic Cloud: rather than just recording that my term "zip" is equivalent to your term "zip" which is equivalent to someone else's term "zip", we'll all just use the same URI, and hence interoperability will be vastly improved.

Does It Work? What Semantic Web Applications Are There?

I addressed this in a previous article: The Semantic Web: Taking Form, but it does bear repeating: the Semantic Web already works, and people are using it.

Semantic Web Applications

Unfortunately, the Semantic Web differs in many ways from the World Wide Web, one being that you can't just point people to a Web site for them to realise how it's working and what it is. However, a number of small-scale Semantic Web applications have been written up. One of the best is Dan Connolly's Arcs and Nodes diagrams experiment:-

One of the objectives of the advanced development component of the Semantic Web activity is to demonstrate how RDF and Semantic Web technologies can be applied to the W3C Process to increase efficiency, reliability, etc. In the early stages of developing an RDF model of the W3C process, the tools I was using to visualize the model while working on it started working well enough that I started applying them to all sorts of stuff.

- Circles and arrows diagrams using stylesheet rules, Dan Connolly.

Of course, this is a rather demonstration-oriented Semantic Web project, but it does illustrate the feasibility of applications being easily built using Semantic Web toolkits.

Another good example of the Semantic Web at work is Dan Brickley et al.'s RDFWeb. RDFWeb is an RDF database-driven hypermedia blogspace, a site where all information is stored as RDF, and that RDF is then used to render XHTML. Plans are underway to incorporate more advanced Semantic Web principles into the site.

What Can I Do To Help?

There are many ways in which one can contribute to creating the Semantic Web. Here's a few of them:-

   * Publish some globally useful data in RDF.
   * Write an inference engine in the language of your choice.
   * Spread the word: do some education and outreach.
   * Help in the development of RDF Schema and/or DAML.
   * Contribute to representing state in RDF, a rather neglected field of research.
   * Apply your own development background to the Semantic Web, and give us all a new angle to consider it from.
   * Instead of using some proprietary system for your next application, consider making it a Semantic Web project instead.

There are many other ways in which one can help as well: ask in the community for more details.

What Now? Further Reading

As of 2001-09, the amount of Semantic Web Education and Outreach materials can only really be described as "pitiful" (hence this introduction, for a start). Here's a short list of some of the good primers and materials currently available, in no particular order:-

   * http://www.w3.org/2000/10/swap/Primer (Getting Into RDF & Semantic Web Using N3)
   * http://www.w3.org/DesignIssues/Semantic (Semantic Web Roadmap)
   * http://purl.org/swag/whatIsSW (What Is The Semantic Web?)
   * http://uwimp.com/eo.htm (Semantic Web Primer)
   * http://logicerror.com/semanticWeb-long (Semantic Web Introduction - Long)
   * http://www.scientificamerican.com/2001/0501issue/0501berners-lee.html (SciAm: The Semantic Web)
   * http://www.xml.com/pub/a/2001/03/07/buildingsw.html (Building The Semantic Web)
   * http://infomesh.net/2001/06/swform/ (The Semantic Web, Taking Form)
   * http://www.w3.org/2001/sw/Activity (SW Activity Statement)
   * http://www.w3.org/2000/01/sw/ (SWAD)

For more information, all the latest news etc., Dave Beckett's Resource Description Framework (RDF) Resource Guide is absolutely brilliant.

Many Semantic Web and RDF developers hang out on the RDF IG IRC chatroom, on irc.openprojects.net, #rdfig.


This article is issued from Wikibooks. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.