
New AdManager extension

The folks at the American Academy of Ophthalmology asked us to create an AdManager that could be managed in full from the wiki itself. They were using BanManPro, which allows you to create zones, with each zone corresponding to a particular ad. AAO wanted per-page or per-category control of which zone was assigned to which page. We created the AdManager extension, but we designed it so it could be used with OpenX or really any ad service. The extension currently sticks the ads in the sidebar. You can place any number of zones there. See it in action here.

I think the way we hooked into the sidebar was neat and the tips I took from it are useful for any other extension that adds to the sidebar. We of course used the SkinBuildSidebar hook. The problem is that monobook automatically puts a header on top of the new sidebar item – in this case AdManager1 – and puts a border around it. Neither of these was desired. Good thing that MediaWiki also gives it an id (p-AdManager1) so you can use some CSS to hide it. The trick here is that there can be multiple zones added so we may have other ids like p-AdManager2 and so on. So here’s the nifty CSS that will match all of those ids:

div[id*='AdManager'] h5 {
	display: none; /* hide the header */
}

div[id*='AdManager'] .pBody {
	border: none; /* and the border */
	padding-left: 0; /* override the normal padding */
}

Unfortunately, you’ll need admin rights to see the 2 special pages that handle the settings. And that’s that.

Categories: Uncategorized.


Integrating Mediawiki while retaining your web site’s look and feel

We finished this project quite a few months ago but I never got around to bragging about it. The nice folks at already had a web site and wanted to add a wiki that would completely match it.
There are a few interesting points here. We based it on the Vector skin. The sidebar was a natural fit since the rest of the site already had one. The main problem was positioning the (always difficult) vector tabs and the personal links at the top. The good news is that it can be done, with a bit of trial and error.

You can see it in action here.

We then pulled off the same idea for another one of their sites. (Both of these skins have since been revamped by a fellow WikiWorks consultant.)

I think the main lesson for me was how easy it is to integrate MediaWiki with the rest of a site. We’ve all come across well-designed sites with a wiki component. Click the wiki and you’re off into monobook-land with no (apparent) links to help you return. Now what? As it turns out, it doesn’t have to be that way. With a little bit of effort, that nice theme can be applied to MediaWiki.



Notes from Wikimania 2011

My wife and I got back a week ago from our trip to Israel, which included the Wikimania conference in Haifa. Wikimania is an annual conference, mostly about Wikipedia, that, like the Olympics, moves around the globe each year. Wikimania, in this case, was half the goal and half an excuse to visit Israel – which is where I grew up, and where my extended family lives, but which I hadn’t visited in six years.

First, I’ll say that visiting Israel was an absolutely amazing experience. We managed to see a huge amount of the country – Haifa, Tel Aviv, Jaffa, Jerusalem, the West Bank, etc. We met secular Jews, orthodox Jews, Arabs, Russians, Americans (plus all the really diverse attendees of Wikimania, but that’s a separate story). We visited ancient ruins, high-tech parks, leafy neighborhoods, government offices, and settlements on the edge of a desert. And my extended family could not have been nicer – and the same actually holds true for almost everyone I met, which really defies the stereotype of rude Israelis. (I was 90% enthused about it and 10% – dare I say? – disappointed.)

With that said, on to the conference. Here are some general observations of mine:

  • I think there was a general view that this might have been the best-organized, best-run Wikimania ever. The venue was great, the food was amazing, the parties were on-point, the keynote speakers (Yochai Benkler and Joseph Reagle) were well-chosen, and there were lots of nice little touches, like awesome videos at the opening and closing (the closing one must have been prepared in less than a day). The one technical glitch was the lack of enough power outlets – but people managed. I’ve been to four of the seven Wikimanias, and they’ve all been enjoyable and well-run, but this one just seemed to have that something extra. I’m biased, though, because I grew up in Haifa, so maybe I’m the wrong person to ask.
  • I have to admit that I had soured a little on Wikimania beforehand – it’s a great experience for anyone who hasn’t been before, but at some point one can get a little Wikipedia’d out. There are always technical discussions about MediaWiki, but never enough, in my opinion. This one had more than usual, though, and a lot of interest among the attendees, so that was a very positive step. As always, I got to talk to other developers, and as always I got to meet some of them for the first time – like Niklas, two ahead of me on the contributors list.
  • The next Wikimania will be in Washington, D.C. At the end of the conference, the D.C. organizers gave a shambolic presentation that I think had everyone worried. Still, the conference will be in the U.S., which gives them a huge natural advantage in terms of getting both attendees and speakers. I plan to be there, in any case.

And, since we’re a Semantic MediaWiki-related company, some Semantic MediaWiki-related comments:

  • There was one directly SMW-related talk, given by Denny Vrandecic, Daniel Kinzler and me. The talk was mostly about adding SMW to Wikipedia, and not about the software per se. Denny talked about the basic premise of SMW, I went into the details and showed some demos, and then Denny and Daniel talked about the planned upcoming “Wikidata” project, which is meant to supply infobox data to all the different language Wikipedias (and, via RDF, to the world), using SMW as the backend. We had 40 minutes to talk, but we could have easily talked for twice that long – we barely talked about the Semantic Web, didn’t mention projects like DBpedia and Freebase, and the SMW demos were quite minimal. Also, there was a packed room, with about 60 people, and lots of questions and comments at the end. Anyway, you can find more information about the Wikidata proposal here.
  • Wikimedia operations guy Ryan Lane gave a talk about the WMF’s server management, where he mentioned that Semantic MediaWiki was used to store details about their setup. (You can see an explanation of that here.) That was pretty cool.
  • There was a session called “Ask the developers”, where I found out about the MediaWiki style guide, which I hadn’t seen before. It’s awesome, and I’ve actually already modified the error-messages display in Semantic Forms to match what’s in the guide.
  • As at previous Wikimanias, there was a good amount of discussion about making editing easier, both of template calls and of wiki syntax in general. Brion Vibber, the head MediaWiki programmer, talked about it a few times, as did Sue Gardner, the WMF executive director. And Jimmy Wales focused a lot of his talk, which was the final talk of Wikimania, on improving the interface on Wikipedia to make things like requesting page moves easier. That last one is not directly editing-related, but it does tie in to making smarter, more user-friendly interfaces. If anything comes out of either initiative, it will undoubtedly be interesting to the Semantic MediaWiki community.


Why Semantic MediaWiki is better than Sharepoint

We have a new essay up, linked from our main site: Semantic MediaWiki vs. Sharepoint. When people talk about the competitors to Semantic MediaWiki, it’s Microsoft SharePoint that always comes up first – it’s popular, and for many people the name “SharePoint” is pretty much synonymous with collaboration software. This essay tries to make our case for why SMW is actually the better tool, in both quality and cost. I put together the essay based on our collective knowledge about SharePoint, in addition to feedback from people who have used both pieces of software extensively.


SMWCon in DC, the writeup

We had SMWCon, the Semantic MediaWiki Conference, in Arlington, Virginia (right near Washington, D.C.) about two and a half weeks ago. I meant to blog about it right afterwards, but things kept interfering. Anyway, here’s my writeup.

We had over 30 attendees, from all over the United States in addition to Germany, the Netherlands, China and Japan. The plurality, though, and maybe even the majority, were from the area.

Some thoughts about the conference:

  • The biggest discovery, in my opinion, was the success of the tutorial day. Previously we’ve always had two-day conferences, but for this one we added a third day at the beginning, which was intended to be strictly for tutorials – letting people learn about the basics of MediaWiki, Semantic MediaWiki and its spinoff extensions. It was useful both in the context of the conference – so that people could attend having a good sense of what all the technology was about – and as simply a training session for the software. I led the first two of the four tutorial sessions. There were around 25 people attending – mostly people from the conference, although some attended only the tutorial day. Each attendee paid around $100 to attend (dinner included), which, as training sessions go, is very much on the cheap side. Still, the tutorial ended up basically paying for the rest of the conference, with a little bit of surplus at the end, which was a nice state of things.
  • The rest of the conference was great too. As usual, we had a mix of talks by developers, users/administrators who shared interesting things they were doing with Semantic MediaWiki, and some talks on more general, sort of cerebral topics (like my talk about the future of semantic wikis). We definitely had more than enough material to cover both days.
  • This was probably the most corporate and enterprise-y Semantic MediaWiki meeting yet. The conferences in the U.S. tend to be more corporate anyway than the ones in Europe, which tend to have a higher share of academics and “hackers”. But this one especially was pretty serious – lots of people representing various organizations, including, as you’d expect, many working either directly or indirectly for the U.S. government.
  • I got to meet a bunch of people that I had only talked to online, which is always neat – there were probably about 10 people there that I had only talked to via email, wiki talk page, etc.
  • We also had some celebrities – well, quasi-celebrities – in the group, including Sherrod DeGrippo, who famously (or infamously, depending on your opinion) shut down the wiki Encyclopædia Dramatica last month and redirected it to an awesome new Semantic MediaWiki-based wiki, Oh Internet. (When I first met her, I told her it was great meeting an internet celebrity – she was nonplussed, and responded, “I’m more like – what do you call the kind of celebrity that most people hate?” “Celebrity,” I answered.) We also had two semantic-web pioneers: Mike Dean from Raytheon/BBN (who helped to arrange the conference) and Mark Greaves of Vulcan. There was also my friend Mike Cariaso, who gets a steady stream of press due to SNPedia, the site he co-founded, most recently a nice writeup in Reason magazine.

The slides from the talks have mostly been uploaded, and are linked on the wiki page. The talks were also videotaped, but the videos haven’t been uploaded yet. It’s in progress, though – another SMW user has graciously volunteered to help with the process, so my guess is that within a few weeks the videos will start showing up on your favorite video-sharing sites.


Can Semantic MediaWiki cross the chasm?

I recently read “Crossing the Chasm” by Geoffrey Moore, which is a book I’d heard about a few years ago, described by someone as required reading for tech entrepreneurs.

The basic premise of the book is that the rise in popularity of tech products is very rarely a smooth upward curve: even for products that have a nice initial burst of popularity, it’s very difficult to move past the early user base and start to get acceptance among the general population (or the population who might use one’s product, in any case). That’s because those early adopters are usually the ones who already seek out and embrace new technologies; while the majority of people are either, in the book’s vocabulary, “pragmatists” (who just care about getting things done with a minimum of headache) or “skeptics” (who actively dislike switching from whatever they’re using now).

When trying to move from the early adopters to everyone else, Moore argues, everything has to change: your marketing pitch, your sales strategy, the array of additional services and peripherals around the product (if any even existed), and even your company culture.

“Crossing the Chasm” then gives a prescription for the marketing and sales strategy to use: find a suitable niche market, and then put all your focus on that, trying to monopolize it to the greatest extent possible. The operative theory is that pragmatists and skeptics tend not to switch over to new technology unless they start to see people around them using it; so you need to focus on a very small group of people and keep marketing to it until many of them have become convinced by each other’s examples to start using your software. Once that happens, you can let your success among that group propagate out to related markets, to other departments within organizations you’ve already sold to, etc.

On top of that, there’s then a specific branding strategy, to use when marketing to that niche group: find the main software you’re trying to replace, then describe yourself as like that software, but with one improvement: the pitch should sound like, “it’s like X, but Y”. That applies even if your software has more than one benefit over your competitor: the key is, at all costs, not to confuse your potential new customers or overload them with facts, which would give them an excuse to tune you out.

There are some case studies cited in the book; the one that stuck out to me was Documentum, a document-management software company which did just that – in 1993, after several years of flat revenues, they decided (based on reading “Crossing the Chasm”’s first edition, it turns out) to focus on a tiny niche: the regulatory-affairs departments at large pharmaceutical companies. As the book puts it, the company was paradoxically “reducing its market scope from ‘all personnel who touch complex documents in all large enterprises,’ to maybe one thousand people total on the planet.” It worked for them, though – they started doubling their revenues every year, they went public in 1996, and in 2003 they were bought by EMC.

Crossing the Chasm was first published in 1991, and then re-published in 2000 with extensive rewrites; the 2000 version is the one I read. So – is it still relevant?

There are ways in which “Crossing the Chasm” is an outdated book, especially for the open-source web software business. The overriding feeling of the book is one of urgency – in its parlance, you have to, with great discipline, “secure a beachhead”, before you can be attacked by rival companies or by “vulture capitalists”, VCs looking for struggling companies to buy up and exploit. The source of all that angst is that, for a company developing proprietary software, the “burn rate”, the amount of money it takes to keep the company going, pay developers’ salaries, pay rent on the office, etc., can be large – over $100,000 per year per employee, on average. For a 20-person startup, that’s over $2 million a year, so if you’re not making that much in revenue every year, you’re in constant danger.

There’s just less urgency in open-source. It’s true that some open-source software companies do follow the proprietary software model – the software is developed by a small group of people who work together in an office, and they just happen to release the resulting code for free. But I think that’s the exception: most of the time in open source, the development work is split between paid and volunteer developers, and even the software’s paid developers can be split among different organizations. (Also, developers often work remotely, saving on office expenses – though that’s happening more and more for proprietary software as well.) All of that serves to keep development costs much lower for software of what I think is comparable quality.

Beyond that, the nature of marketing software in 2011 is almost another world from marketing software in, say, 1995. Back then, the success of the Netscape browser in getting market share through free internet downloads pointed the way of the future; but for the most part, software was sold in stores or through third-party sellers, and people learned about new software from their acquaintances, or through a network of salespeople, or by reading trade publications. These days all of those still exist, but we also have tech blogs, Twitter feeds, and web searches. And in the case of web software, people can instantly check out examples of the software in action. And in the case of open-source web software, they can also download it themselves and try it out for free. (Though proprietary software can also have free evaluation periods and the like.) Which means that, unlike 15 years ago, lots of people can find out about, and use, your software, without a single sales call having to be made (even if people still called on the phone, which apparently no one does any more).

(Finally, there’s a third major difference, which is that much of the software used these days is hosted directly on web sites – or “in the cloud”, as people like to say. And WikiWorks is in that business too, with the Referata wiki-hosting service. But my main topic here is Semantic MediaWiki, which has to be downloaded.)

With all that said, I think the book is still relevant. Human nature hasn’t changed in the last 20 years; most people still won’t switch to using software until they know that other people in their situation are using it. Which means that, given the choice, it’s probably better for one’s future success to have 100 people from the same industry or niche using one’s software, than to have 100 people randomly distributed in different careers around the world.

So where does that leave Semantic MediaWiki? (You could ask the same thing about MediaWiki as a whole, which is another interesting question, but for the sake of simplicity here I’ll just assume that the ideal installation of MediaWiki always involves SMW, and thus that the two are synonymous.) Our usage is right now essentially random, with some interesting pockets of adoption in different markets but no real pattern to it. There’s definitely an argument that we’d be better off focusing our marketing efforts. By “marketing” in this case, I mean the wording we use to describe the software, the kind of marketing documents we create, the types of conferences we attend, the blogs and publications we pitch to, and even potentially the kinds of peripheral software created.

The obvious next question, then, is which market, or markets, should be targeted. Interestingly, pharmaceutical companies – the niche Documentum started with – might be a good choice for us too. Pfizer already uses SMW, and another pharmaceutical company, I’ve heard indirectly, came close to using it for a major project; it backed out a few months ago, but may still end up using it later. Pharmaceuticals have a combination of massive amounts of data; an always-changing data structure (because the concepts being discussed are always in flux), which makes creating industry-specific applications difficult; and massive amounts of money. The first two make SMW a good tool to use, and the third makes it definitely worth our while to get involved.

Another interesting niche is biotech companies, for much the same reason.

Another strong possibility is one or more sectors within the U.S. Government. There, too, there are massive amounts of data, and, for better or worse, massive amounts of money (at least at the federal level). WikiWorks has already done Semantic MediaWiki-related setup work for some U.S. federal agencies, and we hope to do more.

So those are some potential niches; what about a branding strategy? To the extent that there’s a single, identifiable competitor, it seems like it would have to be Microsoft SharePoint. So what’s SMW’s one advantage over SharePoint? Is it like SharePoint, but open-source? That’s certainly catchy, but there are already various other applications that bill themselves that way – which doesn’t mean, of course, that SMW can’t join them. (Very interestingly, the main application listed in that article, Alfresco, is, from what I understand, an open-source update of Documentum, produced by some of the same people. It’s all come full circle! Somewhat less interestingly, I once met the guy from Alfresco quoted in that article.)

SMW could also be “like SharePoint, but Semantic Web-enabled”, which is undeniably true, but maybe not very compelling at this point. And it could be “like SharePoint, but a wiki”, which is, I think, easier to understand, and possibly more compelling.

Any thoughts from the crowd? Comments, as always, are welcome.

Update: I just want to clarify that I’m in no way dismissive of all the many varied and interesting uses that Semantic MediaWiki has gotten so far, including among our clients and including wikis run personally by various WikiWorks members. I hope SMW continues to get lots of usage in all different sort of fields – all I’m trying to discuss here is where to focus future marketing efforts.


Notes from the Wikimedia Data Summit

Update: See the comments below for some important corrections/clarifications from Erik Möller.

On Friday I attended the “Wikimedia Data Summit”, which was held at the O’Reilly Media offices in Sebastopol, California. It was a one-day event that was actually more like three summits, on three different topics. The first, and the one I was involved with, was meant to discuss plans for making the Wikimedia projects, mostly Wikipedia, more data-focused: essentially, the “Semantic Wikipedia” dream that some people have had for a long time. The second was a discussion on how to improve the web analytics of Wikimedia sites: improving the info and visualizations gathered about page views, editing statistics and all that. And the third was a smaller discussion about improving MediaWiki’s wiki-text parser.

The day started with everyone in the same room, first with an introductory talk by Erik Möller and then with 10-minute talks by various people related to each of the three main topics. Six of them were related in some way to Semantic MediaWiki:

  • I was tasked with explaining all of SMW and its family of extensions in 10 minutes (which I sort of managed to do, barely)
  • Denny Vrandecic talked about Shortipedia, his new-ish SMW-based research project
  • Michael Erdmann from ontoprise talked about SMW+
  • Christian Becker, best known as one of the DBpedia people, talked about his work on the SMW+ Linked Data Extension
  • Mark Greaves from Vulcan talked about Ultrapedia, Vulcan’s demonstration of an SMW-annotated subset of Wikipedia
  • Michael Dale talked about the MetaVid extension and website

You can see the notes from that first session here; and, if you’re curious, you can see the slides from my talk here.

We then split up into three groups, and I joined the “Semantic Wikipedia” group, which was led by Erik, and included the six SMW-related people, as well as people from Google, Freebase (although that’s technically Google too), DBpedia and some other groups. I volunteered to be the note-taker for the discussion, and you can see my notes here.

The basic goal, as Erik defined it, is that they want to create a “Wikimedia Data Commons”, in the spirit of Wikimedia Commons – a single site that would serve as a data repository for all the different language Wikipedias. Facts would be entered there in the form of semantic triples, although really they would include more than three things – besides the fact itself, there would be data like the source (which might include a URL), the date, the language that the information is in, etc. Every Wikipedia could then somehow query that data in order to display it within infoboxes; and the data could also be queried by outside applications.
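The fact-plus-provenance idea described above can be sketched roughly like this – a purely illustrative Python model, where all of the class, field, and function names are my own invention, not anything Wikimedia has actually specified:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    """One provenance-annotated statement: a subject-property-value
    triple plus metadata like source, date, and language."""
    subject: str                      # e.g. a Wikipedia article title
    property: str                     # e.g. "population"
    value: str
    source_url: Optional[str] = None  # where the fact came from
    as_of: Optional[str] = None       # date the fact was recorded
    language: str = "en"

def infobox_rows(facts, subject):
    """Query the repository for all facts about one subject, the way
    an infobox on any language Wikipedia might."""
    return [(f.property, f.value) for f in facts if f.subject == subject]

repo = [
    Fact("Haifa", "population", "268,000",
         source_url="http://example.org/source", as_of="2011-01-01"),
    Fact("Haifa", "country", "Israel"),
]
print(infobox_rows(repo, "Haifa"))
```

The point of the sketch is just that each stored item is more than a bare triple: the provenance fields travel with the fact, so any consumer (an infobox, an outside application) can decide how much to trust it.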

Ultimately, this project is Erik’s own, and it became obvious early on that his preferred approach was to use Denny’s Shortipedia as a basis; actually, it turned out that Shortipedia had originally been developed, about five months ago, after discussions with Erik. If I can give my own perspective on Shortipedia: it’s a fact-based approach to data, as opposed to Semantic MediaWiki’s generally page-based approach. As far as I know, Shortipedia uses SMW mainly for its pre-existing database tables, as opposed to making use of much of the logic in the SMW code. Actually, Shortipedia reminds me more than anything (as I told Denny when he first showed it to me) of other semantic wiki applications I’ve seen, like AceWiki and OntoWiki. In the non-SMW semantic wikis, the semantic stuff is generally kept separate from the other content, and there’s usually some sort of mini-form that lets you keep adding triples to a page. That approach generally, from my experience, works poorly for individual wikis; but for a massive data repository like the “Wikimedia Data Commons”, it may well be the right way to go. So there you have it: if you want to see the future of data on Wikipedia, Shortipedia might be it.

As far as the event itself: if I had planned it, I would have done various things differently. The major issue was that I didn’t think all three events should have been held together. The three topics are rather discrete, and I didn’t think there was much “synergy” generated by putting them all under one roof. Worse than that, it meant that various people who ideally would have taken part in two or all three of the meetings (like many of the Wikimedia development staff) could only be in one.

As for the semantic/data part of things, advance notice that Shortipedia was the way things would probably go would have been helpful – I would have spent more time beforehand thinking about the ramifications of it, something I only really started doing today. As it was, the most helpful thing I contributed at the time was probably my note-taking skills (which I’m proud of, by the way!).

Anyway, the big takeaway from the meeting wasn’t my somewhat underwhelming experience, but rather the fact that there now appears to be at least the beginnings of a path forward for data on Wikipedia. That’s probably the more important news, in the greater scheme of things. :)


Wikipedia turns 10!

Wikipedia is turning ten years old tomorrow, and a variety of people and publications have been writing about what it all means. You can see one roundup here. I’ve read a bunch of the articles and essays, and many of them seem to follow a standard template: a summary of Wikipedia’s beginnings, a sample of some of the obscure topics it covers, followed by thoughts about its future, including the usual concerns that its editor community is still too insular, too male, too first-world, too geeky. (All I can say about those concerns is that, if every technology-related project were critiqued on those four terms, there would be a lot of critiquing going around.)

One essay I did like was Clay Shirky’s, in The Atlantic, where he talks about the role Wikipedia has played in changing our perception of authority (in the knowledge sense of the term):

Defenders of traditional authority will object to the relativism of all this, but relativism is all we’ve got — the rise of the scientific method has taken away certainty and replaced it with nothing but process and probability. An authority isn’t a person or institution who is always right — ain’t no such animal. An authority is a person or institution who has a process for lowering the likelihood that they are wrong to acceptably low levels. And over the last ten years, Wikipedia has been passing that test in an increasing number of circumstances.

On a different note, as someone who makes his living off of wikis, I can say with little doubt that my career is due to Wikipedia. Most obviously, WikiWorks deals exclusively with MediaWiki, which is the software that powers (and was created to power) Wikipedia; but even disregarding that, I don’t think I, or the clients we work for, would have been involved in wikis if not for Wikipedia. Before I was a wiki developer and consultant, I was a wiki administrator; and before that, I was a Wikipedia contributor, and before that, a Wikipedia reader. I got into wikis as a direct result of my increasing fascination with Wikipedia over the course of late 2004 and 2005. It seemed very satisfying to me to see text get shaped by lots of editors, most of whom would never meet each other, on the path to some sort of platonic ideal of what each article should look like.

As for our clients: it’s doubtful to me that many companies or organizations would be using wikis if not for Wikipedia: both because it’s taught everyone what a wiki is, and because it’s served as the ultimate proof-of-concept – that the crazy-seeming notion of a site where anyone can change anything at any time can produce a really well-crafted set of information.

Its influence on the world of wikis is probably just a small aspect of the impact of Wikipedia: after all, from my real-world experience, the large majority of people who read Wikipedia on a regular basis have (sadly) no idea that any other wikis exist. Still, it’s important to me; and for all of us working now on the technological future of wikis, maybe we’ll be able to return the favor and teach Wikipedia a thing or two over the next 10 years.


New Vector-based skin complete

We completed a new skin for The idea was to have the wiki’s skin seamlessly integrate with the rest of the site, with a few modifications. We were able to do that pretty well, using the Vector skin. The tabs were a bit tricky but worked out pretty well overall.

See the new wiki skin here. (The skin has since been revamped by a fellow WikiWorks consultant.)


Google DataWiki and Semantic MediaWiki

The Google DataWiki application was released about a week and a half ago, and it’s gotten some buzz since then, on Twitter and the like. It certainly caught my eye: it bills itself as “a wiki for structured data”, which is also a reasonable description of Semantic MediaWiki. And like Semantic MediaWiki, it’s available both as a hosted solution and as open-source, downloadable code.

Let me first say that, in my opinion, the attention DataWiki is getting seems almost entirely due to the fact that it’s from Google. It looks slick, but there’s currently a major gap in the functionality that I think renders it unusable: you can edit any piece of data, but you can’t see the version history for the changes that have been made; that means that information that’s removed can’t then be restored. It’s debatable whether an application without a version history can even be considered a wiki application at all.

Nevertheless, it is a Google project, which means that, besides the spotlight it gets, there’s a good chance that the software will improve. And as simple as the application is, there are already some nice features to it that the Semantic MediaWiki community could really benefit from looking at.

I think one of the major problems with Semantic MediaWiki and the related extensions is what you could call a lack of “wizards”: there are tools that let you create data structures, but unless you really grasp the concept of semantic properties, as well as a variety of MediaWiki constructs like templates, you really first need to read through the documentation (such as it is) before you can get started.

Contrast that with DataWiki: to create an individual data structure (or what’s referred to as a “Dataset”, which in turn has a “Format”), you just click on a single link, which takes you to a form, where you get a graphical tool that lets you add fields; these appear like form fields, so you can easily tell what’s going on:

DataWiki field creation GUI

(Microsoft SharePoint has a similar, non-web interface for creating data structures – that might have been the inspiration here.)

Then, once you create the set of fields, everything is right there on the same page – the set of fields, the list of pages created with this data structure (here called “Documents”), a form for adding new documents, and a form for finding existing documents. They all magically appear as soon as the data structure is set up. And, if that weren’t enough, the page also doubles as a web-based API, which lets you query any of the data remotely.
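To make the Dataset/Format/Document model concrete, here’s a minimal sketch in Python – purely illustrative, since DataWiki itself is not implemented this way; the class and method names are mine:

```python
class Dataset:
    """A toy dataset: the "Format" is an ordered list of field names,
    and each "Document" is a record keyed by those fields."""

    def __init__(self, name, fields):
        self.name = name
        self.fields = fields      # the Format
        self.documents = []       # the Documents

    def add_document(self, **values):
        # Keep only the fields the Format defines, blanking any missing ones.
        doc = {f: values.get(f, "") for f in self.fields}
        self.documents.append(doc)
        return doc

    def find(self, **criteria):
        # Exact-match search over existing documents.
        return [d for d in self.documents
                if all(d.get(k) == v for k, v in criteria.items())]

events = Dataset("Events", ["Title", "City", "Date"])
events.add_document(Title="SMWCon", City="Arlington", Date="2011-04")
events.add_document(Title="Wikimania", City="Haifa", Date="2011-08")
print(events.find(City="Haifa"))
```

The appeal of DataWiki’s approach is that the add-form, the find-form and the document list all fall out of that one field definition – which is exactly the kind of everything-on-one-page packaging the sketch gestures at.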

In Semantic MediaWiki, all this same functionality exists, but it would be spread out over many pages – one for the category, one for the template, one or more to display all the data in a table, calendar and/or map, two form pages – one to add data and one to allow form-based querying of the results, the Special:Ask page for remote querying, plus pages for all the semantic properties.

I should note that, despite how cool the interface looks, there are still major limitations – besides the lack of version history, there’s also no way to have fields of different types: no textareas, checkboxes or date inputs. And the querying and searching is extremely limited. But its simplicity of interface is definitely a model to aspire to.

Personally, I think the way forward for Semantic MediaWiki to be able to achieve this kind of ease-of-use is my proposed Semantic Schemas extension, which would let you store everything about the data structure in a single piece of XML on a wiki page. That way you could, in theory, have a nice graphical wizard for both creating and modifying structures, and you could set up more functionality from the beginning without requiring users to explicitly create, say, forms, or table-display pages. But then, I’m biased.
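For what it’s worth, the kind of single-page XML definition I have in mind might look something like the following – a hypothetical sketch, where both the element names and the parsing code are invented for illustration, not an actual Semantic Schemas format:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema stored as XML on a single wiki page.
schema_xml = """
<schema name="Book">
  <field name="Author" type="page"/>
  <field name="Published" type="date"/>
  <field name="Genre" type="text"/>
</schema>
"""

def parse_schema(xml_text):
    """Read the schema into a plain dict: one place a wizard (or any
    other tool) could get everything it needs about the data structure."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "fields": [(f.get("name"), f.get("type"))
                   for f in root.findall("field")],
    }

print(parse_schema(schema_xml))
```

The design point is that because the whole structure lives in one machine-readable blob, a graphical wizard could round-trip it – read it, let the user edit fields, and write it back – and forms, templates and display pages could all be generated from the same source.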
