Thursday, May 31, 2012

#WMDEVDAYS - #Wikidata

Presentations on Wikidata have started, and there are plenty of thought-provoking ideas here. Daniel Kinzler presents on the interlanguage links, and the team has also thought about how to make use of the information stored in info-boxes.

Consider a Wikipedia with fewer than 1,000 articles. It is extremely likely that it has no information about Pope John III, for instance. The English info-box for him has nine labels. Once these are translated, much of the existing data can simply be presented. Date formats can be adjusted as needed based on the CLDR data, and names may need to change based on how a person is known in each language; Benedictus, for example, is known as Benedict in English.

Obviously, once all this information is validated, you can either create an article or provide the filled-in template as part of the "not found" information. Another option is to create the article once all the information is validated and show an incomplete info-box where the box needs more work.
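
The mechanics are easy to sketch. Below is a toy Python illustration of the idea: once the labels are translated, the stored values can be presented in any language. The Esperanto labels and the data values are invented for this example.

```python
# Toy sketch: present stored info-box data with translated labels.
# The label translations and the data below are invented examples.
LABELS_EO = {
    "name": "Nomo",
    "birth_place": "Naskiĝloko",
    "papacy_began": "Papado ekde",
}
POPE_JOHN_III = {
    "name": "John III",
    "birth_place": "Rome",
    "papacy_began": "561",
}

for key, value in POPE_JOHN_III.items():
    # Fall back to the untranslated label when no translation exists yet.
    print(f"{LABELS_EO.get(key, key)}: {value}")
```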

We want to share the sum of all knowledge, and making use of the info-boxes has potential.
Thanks,
    GerardM

Wednesday, May 30, 2012

#TED and a EUREKA moment for Open Content

Archimedes is one of the best-known Greek scientists. People learn his principle in school and are often able to quote it for the rest of their lives. When we write articles in Wikipedia, sources are considered important. This TED presentation reveals how little we actually know: how little, or how badly, we can refer to the past, and how little we truly understand the relevance of even people like Archimedes.


When you have an interest in sources and in the preservation of knowledge, you will love this presentation. When you are looking for arguments why museums, archives and libraries should make their research and their digital scans available to the public under a free license, you should watch this presentation to the end.

When you are involved in GLAM and Wikimedia projects, by the end of this presentation you may come to an understanding of why it is so important to acknowledge the museums, the archives and the libraries for the work they do on the works we refer to. Yes, much of what they conserve is out of copyright, but they provide the provenance for much of our material past, a past that supplies the original artefacts from which digital manipulation and re-use take off. It is where our digital world connects to the physical world. It is where the sharing of the sum of all knowledge has its origin, its sources.

These sources may provide us with even more knowledge as our ability to research objects increases. What we do not know is what material to value: the prayer book featured in this video was technically a "write-off" and now provides us with our only source for works of ancient Greece we did not know about.
Thanks,
     GerardM

Monday, May 28, 2012

#Language, #script, #Unicode, #font and web fonts

Making the Internet globally accessible is more than running cables. It is also about making sure that you can read and write any language. Once all this is in place, people enabled in this way can share in the sum of all knowledge.

There are few people as intimately involved in supporting languages as Michael Everson. He is known for encoding scripts in Unicode, work that requires both technical and linguistic expertise, and he is also a publisher of books written in minority languages.
Enjoy,
     GerardM

Michael at Chogh Zanbil - Cuneiform .. :)
Are all scripts registered yet ... do we know them all in ISO 15924?
No, not at all. The best-known scripts have been given four-letter codes in ISO 15924, but we tend to be conservative for lesser-used scripts, and try to co-ordinate with proposals for encoding them in the Universal Character Set (a.k.a. ISO/IEC 10646 or Unicode).

Several scripts are not yet encoded in Unicode, and many of them are used by living languages. How do languages cope?
A script (or character) not encoded can't be used in interchange. People can either use the Private Use Area or hack an existing encoding.

What does it do to the cultures involved?
The lack of an encoded script prevents a language from using its script effectively in any computer environment.

Is it known how many scripts used by living languages are not yet encoded?
I don't think we have kept a quantitative inventory. And we always discover something new. I know of a number of specialist scripts like SignWriting and Blissymbols which have not been encoded. We are working on some other scripts, like Woleai and Afáka, and a number of West African scripts, but it is very difficult to contact the user communities to get feedback. There is a huge technological divide. (Not for SignWriting or Blissymbols: for those the problem is a lack of funding to do the work.)

Is it known how many scripts used by dead languages are not yet encoded?
Again, we don't keep a quantitative inventory. The Roadmaps on the Unicode site are as good a checklist as anything.

Several scripts are encoded but there is no freely licensed font for them. Why is this not part of the process of encoding for Unicode?
The Universal Character Set is a character set. Both ISO/IEC JTC1/SC2/WG2 and the Unicode Technical Committee work to study character and script proposals, give the characters the right properties, and get them encoded. It is not the function of either committee to establish implementations, or to give them away. The work is already voluntary (and expensive).

MediaWiki supports web fonts ... What relevance does this have for you, and what opportunities are there for the Wikimedia communities?
It is a great opportunity for Wikimedia to exploit some of the generosity of the many people who have donated to the foundation, and to make good use of the skills of people who have expertise in the Universal Character Set and in font design.

What impact will the availability of freely licensed fonts have on the availability of information in those scripts?
For instance, right now anyone viewing any Wikipedia in any language may encounter text in Ol Chiki, or in Runic, or in the simple International Phonetic Alphabet, and pages have to apologize to the reader because their computer may not display the material correctly. This is *bad* for the encyclopaedia.

What difference would it make if the Wikimedia Foundation were to become a player in the development of fonts?
People using the encyclopaedia would be able to see the information without worrying about seeing ☐☐☐☐☐☐ ☐☐☐☐☐! From a personal point of view, I can say that at various conferences over the past two years, I have spoken with people in the Wikimedia Foundation, and with people from another very large organization, about this matter -- specifically about exploiting my own expertise in the Universal Character Set and in the provision of rare scripts and characters in web fonts -- yet nothing has resulted. I think the message has got through. But so far no one in either organization has decided to take the necessary principled decision that in order to ensure that the information in the Free Encyclopaedia is actually available to people who use it, complete UCS support should be provided in a suite of freely-available and maintained webfonts.

Provenance is the basis for the establishment of facts. Is transcription in the original script essential?
Why wouldn't it be? That's the source text. Encoding it correctly means that it can be interpreted by the reader if he or she wishes to consult the primary source. Anything else obliges the reader to use someone else's interpretation. Of course expertise is needed, but the closer one can get to the primary source, the better.


Michael, why "Alice's Adventures in Wonderland"?
I love languages, and it has been a great honour for me to publish Alice for the first time in a number of minority languages which might otherwise never have seen the text. Alice is available in the following languages: Cornish, English, Esperanto (Kearney), Esperanto (Broadribb), French, German, Hawaiian, Irish, Italian, Jèrriais, Latin, Lingua Franca Nova, Low German, Manx, Mennonite Low German, Borain Picard, Scots, Swedish, Ulster Scots and Welsh, and several other translations are being prepared.

Reading #Arabic

Learning to pronounce texts in classical Arabic is an adventure. It is a challenge for me, and I have been promised that I will be able to vocalise after four day-long sessions and a month of practising at home.

It is fun as well. The sounds are not only different for me but also for my fellow students. One of them speaks Moroccan Arabic and is also struggling with the difference in pronunciation.

When you are taught about the structure of Arabic, you learn that the "i" and its associated variations are written under the character they are associated with. In figure 1, it is combined with a "shadda" on a "ya". My teacher explained that he is not able to write it this way with Microsoft Word, and that you will instead find it as you can see in figure 2.

The documentation for the Arabic script at Unicode explains: "computer fonts often follow an approach that originated in metal typesetting and combine the kasratan with shadda in a ligature placed above the text".

My teacher wants to control the way the characters show; it makes the study material consistent for his students. Showing the kasratan with shadda above the text is then something that can be taught once the basics are understood. The question is: does he need a different font or a different word processor?
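
The encoded text is identical in both figures; only the rendering differs. Here is a minimal Python check of the codepoints involved, assuming the sequence in figure 1 is ya + shadda + kasratan:

```python
import unicodedata

# Ya carrying shadda and kasratan. Whether the kasratan is drawn below
# the letter or in a ligature above it is decided by the font and the
# renderer, not by the encoded text.
for ch in "\u064A\u0651\u064D":
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```

If that is so, the answer is probably a different font: the characters he types stay the same, only the way a font positions the marks differs.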
Thanks,
      GerardM

Thursday, May 24, 2012

Supporting a #font for #Arabic III

A new contextual shape for a faa-yaa combination
The "Amiri" font is available as a web font for the Arabic language in the projects that have the WebFonts extension enabled. One example of the Amiri font in action is on the English Wikisource.

The recent release of the "Amiri" font improves the readability of Arabic texts, and it includes Latin characters as well. The readability improvements are very welcome. The Latin characters, however, are excess baggage when you use Amiri as a web font.

For many people the availability of a web font for the Arabic script is news. The new release will have to be assessed for its technical aspects. Butchering the font and removing the Latin script seems the obvious thing to do. It just needs doing.
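
For what it is worth, the butchering can be sketched with the fontTools library. The file names and the exact ranges to keep are assumptions, and the OpenType shaping rules Arabic needs must be kept:

```python
from fontTools import subset
from fontTools.ttLib import TTFont

# Keep the Arabic blocks, drop everything else including Latin.
options = subset.Options()
options.layout_features = ["*"]  # keep all OpenType rules; Arabic shaping needs them

font = TTFont("Amiri-Regular.ttf")  # assumed file name
subsetter = subset.Subsetter(options=options)
arabic = (
    list(range(0x0600, 0x0700))    # Arabic
    + list(range(0x0750, 0x0780))  # Arabic Supplement
    + list(range(0xFB50, 0xFE00))  # Arabic Presentation Forms-A
    + list(range(0xFE70, 0xFF00))  # Arabic Presentation Forms-B
)
subsetter.populate(unicodes=arabic)
subsetter.apply(font)
font.save("Amiri-Arabic.ttf")
```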
Thanks,
     GerardM

Wednesday, May 16, 2012

Turkish Lira is supported in #Unicode

When #Turkey selected a new symbol for its currency, the Lira, it had to make sure that people could actually use it. Given that almost all modern computing is done with Unicode fonts, it was important to have the symbol included in Unicode as soon as possible.

The Turkish Lira will be supported in the Unicode 6.2 release that was just announced for the third quarter of 2012. The next step is to have the symbol included in fonts. A font that includes the new symbol can already be found on the website of the Turkish central bank.

In a previous Unicode release the Indian Rupee sign was introduced. The question is very much to what extent and at what pace people will get updated fonts that include such symbols. The Wikipedia article on the Indian Rupee uses an image.

It is possible to create a font that includes characters like these currency symbols and make use of the WebFonts extension. In many ways it is more elegant than using graphics.
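
Whether a given font already includes the new symbol is easy to check. A small sketch with the fontTools library; the font file name is an assumption:

```python
from fontTools.ttLib import TTFont

LIRA = 0x20BA  # TURKISH LIRA SIGN, new in Unicode 6.2

font = TTFont("SomeFont.ttf")  # any font file you want to test
cmap = font["cmap"].getBestCmap()
print("Turkish lira sign included:", LIRA in cmap)
```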
Thanks,
      GerardM

Tuesday, May 15, 2012

Legacy projects II

When tools are no longer maintained and no longer promoted, they die. The good news is that the tools Keith Stribley created for the Myanmar language have found a new home.

The question is to what extent they are salvageable and whether it is worth the effort. For software there are two primary considerations: copyright and licensing, and the source code.


Without source code the first question is less relevant, and the reference to the Mercurial repositories points to Keith's old website, which is gone. There are references to the CC-by-sa license on some pages, but for software such a license is not really appropriate.

When you look at the subjects Keith covered, they are still very relevant and impressive. At the Myanmar Wikipedia two out of three webfonts do not work for me, and we do not have an input method yet. It will be great if the tools Keith created can find another use.

The result of my previous post is this update. I hope that there will be a future update to this post bringing you more positive news.
Thanks,
     GerardM

#CLDR will know language names in #Esperanto

#MediaWiki uses the language names as defined in the CLDR. It is therefore important that people compile a list of the language-name translations for their language and make them available to be used as the standard translations.

Arno did exactly that. The list he created contains all the codes used for Wikipedia combined with the translation in Esperanto. It is an important effort, and it would be great to have such a list for all the other Wikipedia languages as well.

As standards are standards, we were asked to provide the list in an XML format. It took some pasting and some find-and-replace, and it looks good. The only problem is that some of the codes used are not standard codes. Several codes have been removed; "als", for instance, is the code for Tosk Albanian, not Alemannic. There may be some other "language codes" in there that are not recognised in a standard.
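
The pasting and replacing could just as well be scripted. A rough sketch of the conversion; the input file name, its tab-separated format and the skip list are assumptions:

```python
# Turn "code<TAB>name" lines into CLDR-style <language> elements,
# skipping Wikipedia-only codes that are not valid standard codes.
NON_STANDARD = {"als", "simple"}  # illustrative skip list

with open("eo-language-names.txt", encoding="utf-8") as src:
    for line in src:
        code, name = line.strip().split("\t")
        if code not in NON_STANDARD:
            print(f'<language type="{code}">{name}</language>')
```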

If your language could do with additional translations, please follow the Esperanto example and provide a list of translations in your language.
Thanks,
      GerardM

Monday, May 14, 2012

Legacy projects

Last year, I reported the death of Keith Stribley. Keith was well known for the support he provided for the Myanmar language. He used his own website, where people could download the tools he had written.

Today I tried to visit his website; it was gone. I googled for some of his projects and did not find them. It is truly sad that there is so little to find of the good and relevant work that Keith did.

I am sure that Keith's projects are not the only relevant projects that came to an all too abrupt halt. I can only hope that someone will prove me wrong with regard to Keith's projects.
Thanks,
    GerardM

Benefit from the #MediaWiki Babel extension

At #translatewiki.net a Northern Sami Wikipedian asked for and was given translation rights. When a person wants to localise in a language, we ask them to provide information about their proficiency in the languages they know. At translatewiki.net we use the Babel extension for this.

When the Babel extension is not localised, I often ask people to localise Babel. This time the person had not given us the opportunity to send him an e-mail, so I went to his profile on se.wikipedia.org to ask the question.

On the profile page I found his Babel information provided by templates and, as you can see to the right, many templates are missing and some, like the ones for Swahili and Japanese, are incomplete. I copied his Babel information to my talk page. For some of the languages there is no localisation, but the information is now at least readable in English.

The templates provided me with enough information to localise some of the Babel messages in Northern Sami. All that is left is for these messages to be distributed to the Wikimedia projects.
Thanks,
    GerardM

Saturday, May 12, 2012

#Font subsets IV

After installing #Fontforge, I still have to use it for real. The problem I want to solve is to reduce an existing font in size and restrict it to only one script.

There are many fonts around that include everything and the kitchen sink too. When the only thing that is needed is a specific font for use on a specific webpage, it does not make sense to send the excess bulk as well.

FontForge as a tool is intended to build fonts; reducing a font in size seems not to be the use case it was made for. I do not really know the tool, so I am looking for help. The ultimate goal is to have efficient web fonts for every script.
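
For the record, FontForge can itself be scripted in Python, and something along these lines may be a starting point. I have not verified it on a real font; the file names and the Runic range are assumptions:

```python
import fontforge  # FontForge's own Python module

font = fontforge.open("EverythingAndKitchenSink.ttf")  # assumed input
# Select the one script to keep (here the Runic block), invert the
# selection and clear everything else.
font.selection.select(("ranges", "unicode"), 0x16A0, 0x16FF)
font.selection.invert()
font.clear()
font.generate("RunicOnly.woff")  # the extension determines the output format
```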
Thanks,
     GerardM

Friday, May 11, 2012

#MathJax is looking for #money

#Mathematics is said to be a language of its own. When you look at a formula, all the logic is in the formula, and any text only aids the understanding. Who cares about text ...

This must have been what the developers of MathJax had in mind when they wrote their application. It is all about mathematics, and all the rest is obviously, eh, English.

MathJax CDN is what the people behind MathJax are asking money for. It provides a service that gives you beautifully presented mathematical formulae in your browser. Because MathJax knows YOUR browser, it does the best it can for the web page that is served.

To finance the CDN service, they are asking for donations. I do understand English, so I can inform you that at the time of publishing this blog post they had $370.00 in pledges.

Sadly, neither the MathJax software nor the MathJax website is internationalised. As you can read in this mail, the software will need to be adapted to allow for localisation. When the MathJax developers work together with the fine people at translatewiki.net, they will find people who can advise them on how to internationalise. Recently the JavaScript used by MediaWiki was adapted to support grammatical gender and plurals; possibly that code or expertise can be used for MathJax as well.

Once MathJax is internationalised, it will be localised in many languages ... There are many proud mathematical cultures and traditions outside of the English-speaking world. Given that MathJax is already more or less usable in the projects of the Wikimedia Foundation, translatewiki.net is the obvious choice for the MathJax localisation.

When everything is said and done, the request for donations will be made in other languages as well. It surely helps when you address people in their own language.
Thanks,
      GerardM

The #OpenID challenge

At #Translatewiki.net a request was made to support OpenID. The beauty of OpenID is that it reduces the number of websites that store your password. This makes browsing the Internet arguably safer.

The translatewiki staff is hesitant to support yet another nice-to-have extension. It has been burned by accepting LiquidThreads in the past: LiquidThreads is a great idea and provides a much better user experience, but it has not been properly supported. There is a promise of a release somewhere in an unspecified future.

Wikinaut did take over support of the OpenID extension. He provided patches, updated the documentation and, equally important, runs it on his own MediaWiki wikis. The need for support seems fulfilled; the question is not only whether translatewiki is interested but also whether the WMF is interested in providing improved security.
Thanks,
     GerardM

Tuesday, May 08, 2012

Birthday card

How many people who work for the WMF or are active in the chapters are over 50? This was one of the questions on one of the mailing lists. As I received a really nice card from Valerie Sutton for my 53rd birthday, I obviously qualify for that age group.


The card shows "Happy birthday" in Flemish Sign Language, and given that it is a different language, "Happy birthday" is as accurate a translation as "Hartelijk gefeliciteerd". SignWriting, the script used in the card, is not yet encoded in Unicode.

The ambition for Wikipedias in sign languages has not diminished. Recently there was an update of the script engine, and one argument for this change is that it will make a MediaWiki extension for SignWriting easier to build.
Thanks,
      GerardM

A #font for ancient scripts


#Wikipedia refers to and #Wikisource includes sources that were written millennia ago, sources that inform us about the times when what is now Western culture was centred in places like Athens, Crete or Karnak. The scripts used were different from modern scripts but, like modern scripts, many of them could and did make the transition into the digital age: they were encoded in Unicode, and there are fonts available for these scripts.

George Douros has been really active in the creation of fonts for ancient scripts; many are available from his website. His message about licensing is plain and simple and is probably all that is required. George is happy to see his fonts used as webfonts and is willing to help when there are issues with his fonts.

The list of what he has on offer is impressive:
Aegean Numbers, Alchemical Symbols, Anatolian Hieroglyphs, Ancient Greek Musical Notation, Ancient Greek Numbers, Ancient Roman Symbols, Arkalochori Axe, Arrows, Basic Latin, Block Elements, Box Drawing, Braille Patterns, Byzantine Musical Symbols, Carian, Combining Diacritical Marks, Combining Diacritical Marks for Symbols, Combining Half Marks, Control Pictures, Coptic, Counting Rod Numerals, Cretan Hieroglyphs, Cuneiform, Cuneiform Numbers and Punctuation, Currency Symbols, Cypriot Syllabary, Cypro-Minoan, Cyrillic, Cyrillic Supplement, Deseret, Dingbats, Dispilio tablet, Domino Tiles, Egyptian Hieroglyphs, Egyptian Transliteration characters, Emoticons, Gardiner set of Egyptian Hieroglyphs, General Punctuation, Geometric Shapes, Gothic, Greek and Coptic, Greek Extended, Hieratic alphabet, IPA Extensions, Last Resort font glyphs, Letterlike Symbols, Linear A, Linear B Ideograms, Linear B Syllabary, Local variants of Ancient Greek and Old Italic alphabets, Lycian, Lydian, Mahjong Tiles, Mathematical Alphanumeric Symbols, Mathematical Operators, Maya Hieroglyphs, Meroitic, Miscellaneous Mathematical Symbols-A, Miscellaneous Mathematical Symbols-B, Miscellaneous Symbols, Miscellaneous Symbols and Arrows, Miscellaneous Symbols And Pictographs, Miscellaneous Technical, Musical Symbols, Number Forms, Old Italic, Old Persian, Optical Character Recognition, Phaistos Disc, Phoenician, Phrygian, Playing Cards, Sidetic, Spacing Modifier Letters, Specials, Superscripts and Subscripts, Supplemental Arrows-A, Supplemental Arrows-B, Supplemental Mathematical Operators, Supplemental Punctuation, Tai Xuan Jing Symbols, Transport And Map Symbols, Troy vessels’ signs, Ugaritic, Yijing Hexagram Symbols, Text Fonts based on the work of Firmin Didot (1764-1836), Richard Porson (1757-1808), Victor Julius Scholderer (1880-1971), Alexander Wilson (1714-1786), Claude Garamond (1480-1561), Demetrios Damilas (c. 1493), Robert Granjon (1513-1589) et al.
The MediaWiki WebFonts extension is enabled on Wikisource. It is therefore up to the Wikisourcers to make their pick and ask on Bugzilla for a specific font.
Thanks,
     GerardM

Monday, May 07, 2012

Four essential questions

#MediaWiki supports #Unicode, and any script that is supported with a freely licensed font is a candidate to be supported by the WebFonts extension. As I was looking for freely licensed fonts, I came across a charming website. It provides translations of four questions that are really relevant to travellers:
  • Where is my room?
  • Where is the beach?
  • Where is the bar?
  • Don't touch me there!
The list of languages and scripts is impressive. It is great to know how to ask these questions in 538 languages; understanding all the potential answers is something else again. In order to show the questions, fonts are needed to represent all the scripts involved. Originally this was part of the project:

"The Gallery of Unicode Fonts was created by David McCreedy and Mimi Weiss in March, 2004 as part of their Four Essential Travel Phrases website. In October, 2006 the site was ceded to WAZU JAPAN." Wazu.jp is a really interesting source of information on fonts for many scripts.
Thanks,
     GerardM

Friday, May 04, 2012

Font subsets III

#Google web fonts is important. A major player makes freely licensed fonts available for general use. They allow people to be more expressive because the right font adds to the message. They allow people to contribute to existing fonts and make new fonts available.

Both Google and the Wikimedia Foundation invest in freely licensed web fonts, and because of this an opportunity for cooperation exists. The Wikimedia Foundation supports all languages in all scripts, particularly for use in its projects, and Google supports the world with Google Docs, where people can use web fonts in their own documents and in their own language.

As the two organisations complement each other so well, it would be great if they shared their web fonts. Wikipedia can become more expressive, and for Google Docs usable fonts can be associated with the languages they support. A combined outreach by Google and the WMF will make web fonts even more visible, and it will make the effort of both organisations even more relevant.
Thanks,
     GerardM

#Font subsets II

On the Wikitech list, Glanthor requested support for Junicode as a web font for historic texts. When you read about the Junicode font on SourceForge, you find that it supports 3,250 characters in the "regular style". Its speciality is supporting medievalists in their work, and it covers many scripts. It is a really big font.

Arguments to support Junicode:
  • it is a freely licensed font
  • it has a clearly defined use case
Arguments against the use of Junicode as a web font:
  • it is really big
  • it does not target one script
  • it is only one font for a script
When you read the Wikipedia article on the Runic script, it becomes abundantly clear that the Runic script evolved over time. Not only did the shapes of the characters change, the number of characters in use differed as well. This is perfectly normal, and it argues against the use of a single font for a single script.

As Junicode is freely licensed, it is possible to break the font up into pieces and have a separate font for Runic and another for Gothic. When other fonts for Runic become available as well, we can show sources in the font that best resembles the original and provide a more familiar font for easy reading too.
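
Breaking the font up into per-script pieces can be sketched with the fontTools library; the block boundaries are the ones Unicode defines, the file names are assumptions:

```python
from fontTools import subset
from fontTools.ttLib import TTFont

# One single-script web font per Unicode block.
blocks = {
    "Runic": (0x16A0, 0x16FF),
    "Gothic": (0x10330, 0x1034F),
}
for name, (lo, hi) in blocks.items():
    font = TTFont("Junicode.ttf")  # assumed file name
    subsetter = subset.Subsetter()
    subsetter.populate(unicodes=range(lo, hi + 1))
    subsetter.apply(font)
    font.save(f"Junicode-{name}.ttf")
```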
Thanks,
       GerardM

Thursday, May 03, 2012

#Font subsets

A font with all the characters for the Latin or Cyrillic script is big: over a megabyte big. This is considered too big for use as a web font, particularly when mobile devices are targeted as well. For this reason, moves are under way to split mega fonts into subsets.

At SIL they are working on font subsets. Their criterion is to include all the characters used in a given "region"; in this way they explicitly target a range of languages. It does reduce the size, and one font can be used as a web font for all these languages. When they are to be used on Wikipedia, it will still be necessary to identify the specific language and have the language associated with a particular region.

Typically a font does not include all characters anyway. It is created with a language in mind, and when another language needs extra characters, that is tough. Many languages use the same subset of characters, and when a font is identified as complete for one language, it follows that it is complete for every other language that uses the same characters.

SIL needs a way to subset its existing fonts. Google, in contrast, provides many web fonts that cover subsets of the Latin script. As it is not made obvious whether these fonts support languages like German, French or Dutch, they are not really attractive when English is not your language.

Both Google and SIL provide solutions. The key question they do not explicitly answer is: does it support my language?
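
The question can at least be asked mechanically: compare a font's character map against a sample of the characters a language needs. The exemplar set below is an illustrative, incomplete sample for German, and the file name is an assumption; CLDR publishes exemplar character sets per language that would make such a check systematic.

```python
from fontTools.ttLib import TTFont

# An (incomplete) sample of the characters German needs.
GERMAN_SAMPLE = set("abcdefghijklmnopqrstuvwxyzäöüß")

cmap = TTFont("SomeLatinSubset.ttf")["cmap"].getBestCmap()
missing = sorted(c for c in GERMAN_SAMPLE if ord(c) not in cmap)
print("Missing characters:", missing if missing else "none")
```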
Thanks,
      GerardM

A #CLDR walkthrough

For #MediaWiki, the CLDR information is important. Sadly, for many of the languages supported in a Wikimedia Foundation project the information is not (yet) available. Several things are needed:
  • People who know their language well enough to enter the data
  • People who know their language well enough to verify the data entered
These people exist for any language. The question is what it takes for people to enter the data. For the Asturian language, a language from Spain, the data is now being entered.

One of the things that may help is instructional material, and it is quite wonderful that an instruction video has just been released. To quote the message announcing the material:
A new 52-minute walkthrough video is now available, showing how to use the CLDR Survey Tool to enter data, and prioritize your work. The video and explanatory material are available on this CLDR site page.

So please watch the video and do what you can for your language.
Thanks,
     GerardM

Wednesday, May 02, 2012

Buying #software

The software industry insisted that you do not buy a product but acquire a license. According to the highest court in the Netherlands, this is not the case. In a blog post (in Dutch) Arnoud Engelfriet explains the consequences:

  • As it is a product that is bought and sold, the supplier has to conform with all the rules that apply to the sale of goods
  • Software can be expected to provide working functionality
    • When it does not, the supplier has to amend the situation
With this change in the status of software, all the mechanisms that protect consumers will kick in, including the rules and regulations around sales over the Internet (you can return such goods within a given time-frame).

It will be interesting to learn how the software industry will respond.
Thanks,
     GerardM

#Money for #Wikimedia related projects

Are you passionate about this one idea, this one project that will, in your opinion, make a difference? Do you know how to run such a project, and is it only the lack of money that prevents you from executing the plan?

If this description fits you, you should know that the board of the Wikimedia Foundation passed a resolution to create a "Funds Dissemination Committee". The guidelines for this committee are being set up, and they want your input.

The cool thing is that there will be a process to get more done and financed. Obviously, it needs to be arguably a project that will benefit the Wikimedia movement, and at this time you can help define the arguments. This project is run by the Wikimedia Foundation and the Bridgespan Group.

There are two things you can do:
  • Help create a sane funds dissemination process
  • Plan and prepare "must have" projects that need funding and that you can run
What is really interesting is that it will be Wikimedia-related, not just Wikipedia-related. This means that all the existing bright ideas about the other projects can be reconsidered. If you are able to sell your idea to this new committee, you may find yourself realising a long-held ambition.
Thanks,
      GerardM

Tuesday, May 01, 2012

65 localisations needed for #Cherokee

At #translatewiki there is a requirement for a minimum number of localisations. The objective is that people find at least some localisation of the user interface when their language is selected.

Cherokee is one of the languages that do not have the required minimum of 65 localisations for MediaWiki. Some of the localisations that do exist are in need of attention; one of our localisers identified several messages that needed work.

As the required minimum of 65 messages is not reached, these improvements are not finding their way into the MediaWiki code base. To resolve this, we need someone who is competent in helping us with the localisation of MediaWiki in Cherokee.
Thanks,
      GerardM

If it is about getting the message out ...

The blog post about #FarmAfripedia got quite a lot of attention. It was republished on the Kabissa blog, giving it more exposure in Africa, and it was commented on in several places. Sadly, the most relevant message is that it will be almost impossible for FarmAfripedia and Wikipedia to cooperate.

The problem is with copyright and licensing. The use of the information on FarmAfripedia is restricted by the CC by-nc-sa license. The Wikimedia Foundation projects are available under a CC by-sa license, which does allow for commercial use. The WMF rationale is to bring the information it is the custodian of to as many people as possible. This is why commercial use is allowed, and this is why we cooperate with telephone companies to make our content available free of charge on mobile telephones in our Wikipedia Zero projects.

FarmAfripedia uses the CC by-nc-sa license because it makes it easier to use material from the United Nations. The aim of the UN is very much to get its message out; using a license that prevents cooperation and re-use downstream is clearly not in its interest.

There is no right or wrong here. It is only a sad realisation that incompatible licenses prevent the kind of cooperation that is in the best interest of everybody involved. FarmAfripedia may reconsider its licensing, the UN may reconsider its licensing, but for the WMF it is all about getting the message out as widely as possible.
Thanks,
     GerardM

Now at #translatewiki: #Wikidata

Once software makes its presence felt at translatewiki.net, it changes from a talking point into an actionable item. Wikidata is now a reality for the localisers at translatewiki: there are messages that can be translated and commented upon. They are being translated, and the first comment, about a typo in a message text, found its way onto the [[Support]] pages.

Wikidata has the potential to make a huge difference, particularly to the smaller Wikipedia projects. When this functionality is localised, it will make Wikidata usable to all members of all Wikipedia communities.

It will be interesting to learn if the Wikidata demonstration installation makes use of the LocalisationUpdate process. Without LU working properly there will be no daily updates with the latest localisations from translatewiki.

The same question can be asked for any and all projects that have a presence on one of the Wikimedia Labs servers. One of the original use cases for the LocalisationUpdate extension is for it to work on any and all MediaWiki installations. When it works, a MediaWiki installation, never mind whether it is in India, Russia or Germany, will feature the latest localisations. This provides a powerful incentive to ensure that the relevant localisations are always up to date.
Thanks,
     GerardM