There is a big debate about bot-generated articles based on the taxonomy of species. My take is that this whole discussion is off the rails. It is off the rails because people do not understand the vagaries of taxonomy. It is off the rails because people forget what we aim to do: provide knowledge by sharing information.
I loved what Erik Zachte had to say; he wrote about uploading images to Commons and finding that there were no articles in any Wikipedia about the subjects of the pictures. It changed his perspective on bot-generated articles. When enough information is available to a bot, it can generate articles on 280+ Wikipedias. The alternative is not providing information on the subject in any language.
Another loud argument was about taxonomy; article number 1,000,000 on the Swedish Wikipedia is about a species that was recently renamed. As some people would have it, the information was no longer “valid”. One counter-argument: when people know a specimen by the “old name”, there would otherwise be no information to be had. Another counter-argument: from a taxonomic point of view, the validity of a name lies only in the quality of the publication, and as a consequence the old name remains valid. To make this point abundantly clear: Homo sapiens is what most people know as the taxonomic name for a human being. I am not completely sure, and I do not care that much, but I seem to remember that “Homo sapiens sapiens” is what has been used more recently in taxonomy for us "thinking men".
Let’s cut the crap and analyse the situation:
- Many Wikipedians hate stubs, without any consideration for the opinions of others
- Stubs, particularly well-designed stubs, are an invitation to edit them
- Our prime objective is to provide information
- In all the recent hoo-ha there has been little talk about technical possibilities
One solution for machine-generated stubs is to keep them in their own namespace and move them to the main namespace with the first human edit. This will not shut up all the detractors, but it removes their arguments.
Another solution is to have the bots generate the information only when requested. It does not need to be saved; it only needs to be cached. Given that it is a bot generating the information, the script it uses can be translated for use in other languages as well.
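To make the idea concrete, here is a minimal sketch of on-demand stub generation with caching. Everything in it is illustrative: the species record, the item identifier, and the per-language templates are made up, and a real bot would read from a proper data source and translatable message files. The point is only that the text is rendered when asked for and cached, never saved as an article.

```python
from functools import lru_cache

# Hypothetical structured record a bot might hold for a species;
# field names are illustrative, not any real database schema.
SPECIES = {
    "Q25265": {"name": "Erithacus rubecula", "family": "Muscicapidae",
               "rank": "species"},
}

# Hypothetical per-language sentence templates; translating the script
# for another language means adding one more template here.
TEMPLATES = {
    "en": "{name} is a {rank} in the family {family}.",
    "sv": "{name} är en art i familjen {family}.",
}

@lru_cache(maxsize=None)
def render_stub(item_id: str, lang: str) -> str:
    """Generate stub text on request; lru_cache keeps it cached, not saved."""
    record = SPECIES[item_id]
    return TEMPLATES[lang].format(**record)
```

Because the renderer is pure (same input, same output), caching is safe, and a cache flush after a data update is all that is needed to serve fresh text.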
Yet another solution is to associate such scripts with Wikidata items. The information provided in this way would be truly complementary to what is available in Wikipedia. An added bonus would be that it takes away any room for Wikipedians to complain. Hm.. possibly; probably not.
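A script tied to a Wikidata item could work along these lines: fetch the item's machine-readable data from the real Special:EntityData endpoint, then pick a label in the reader's language with a fallback. The sample entity below is hand-written for illustration (loosely mirroring the shape of the real JSON) rather than fetched from wikidata.org, and the helper names are my own.

```python
import json

# Hand-written sample loosely mirroring Wikidata's entity JSON shape;
# a real script would fetch this from the URL built below.
SAMPLE_ENTITY = json.loads("""
{
  "id": "Q25265",
  "labels": {"en": {"language": "en", "value": "European robin"}},
  "claims": {}
}
""")

def entity_url(item_id: str) -> str:
    # Wikidata's endpoint for machine-readable item data.
    return f"https://www.wikidata.org/wiki/Special:EntityData/{item_id}.json"

def label(entity: dict, lang: str, fallback: str = "en") -> str:
    """Return the item label in the requested language, falling back
    when no label exists in that language."""
    labels = entity["labels"]
    chosen = labels.get(lang, labels[fallback])
    return chosen["value"]
```

The fallback matters for exactly the 280+ language case: where a label (or a whole article) is missing, the script can still show something rather than nothing.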