What Scientific Computing Is Not

(Ref Id: 1395535781)

You've gotta love Scientific Computing. Notable people write for it, and it has interesting and insightful articles. Many readers get a free electronic edition as well as free news mailings with interesting happenings from around the world. For the busy professional it may appear to be one of the easiest ways to get one's fill of scientific computing information, but it is not the first or the last word in computing. Not by a long shot.

On Wednesday (3/19/2014) Michael H. Elliott posted 'State of ELN: Current Perceptions and New Paths.' If we skip past the fairly lengthy article to the References section near the bottom, we find a book published in 1962 by T.S. Kuhn and two articles authored by, lo and behold, one Michael H. Elliott. So the author found his own prior writings and a fifty-year-old book to be the only sources worthy of mention. What does this tell us?

The article itself talks about current perceptions and new paths in the ELN marketplace. The author appears to be a researcher, either past or present, as the by-line 'CEO of Atrium Research and Consulting' suggests. One could argue that the article uses research to arrive at its conclusions and that such sources are therefore too numerous to mention. This argument fails, however, when paradigms are introduced to interpret trends. For instance, it is perfectly okay to say, 'in a survey Atrium Research conducted last fall of major product users...' and follow that with some evaluation of your data. It is not okay to draw far-reaching conclusions about technology trends like, "a typical technology adoption curve is presented in Figure 1, highlighting growth from early stage through mid-market to eventual decline." At least you should not be doing this without citing a source. Did the data suggest this conclusion, or is some other source being utilized? One cannot tell, which is exactly why we scrutinize sources so carefully (when given the opportunity to do so).

That an article with such a shallow list of references would be accepted for publication shines a light on what appear to be serious problems with the Scientific Computing editorial staff. Such a list of references in an academic publication would immediately raise the ire of reviewers and trigger a demand for additional sources. Admittedly, Scientific Computing does not purport to be any sort of academic publication. But it does, like other publications of the same sort, purport to be an honest publication. That the staff accepted a submission the length of an entire term paper yet devoid of a single significant external source should concern them. Worse, the appearance of publishing mere rehash of the author's own previous articles should concern them even more. Again, what does this tell us?

In order to divine an answer to our questions we must examine the contents of the article in question. Its more interesting points all appear in the first three or four paragraphs; after those, the conclusions become increasingly murky. For instance, there is a stipulation that "fluctuations of biopharmaceutical R and D had a big impact on new software sales." This is not to say that these conclusions are inaccurate -- just that they appear to require a copy of Atrium's research findings to substantiate. Of greatest interest to us in the DIY LIMS stratosphere is the following: "The lack of need for intellectual property protection and limited funding are stimulating an upsurge in the use of open source software and low-cost tools like Evernote. Pressure on government funding in the U.S. and Europe makes this trend likely for the foreseeable future."

So according to Mr. Elliott, cost and the lack of need for important functionality are driving the use of open source. Perhaps it is time for Atrium to hire some new researchers. First of all, Web 2.0 tools like Evernote are not open source; lumping them together with open source is foolhardy. There is some overlap in the motivations for adopting an open source package and a Web 2.0 tool, but part of the attractiveness of open source is the ability to modify the functionality to suit one's needs. There is less dependence upon a central support system; in many cases no such centralized support exists. You cannot say this about Web 2.0 systems. They are often closed source and typically vendor supported. You utilize them by signing onto a web site and paying a subscription fee, but you do not enjoy the same privileges that one does when utilizing an open source package. That makes their providers more like traditional software producers than open source system providers.

A comparison between offerings based purely on price is little more than a 'race to the bottom.' It is idiocy. If all open source had to offer were a lower cost than commercial alternatives, open source would not exist; it would be called 'generic software' or 'rip-off software.' The reality is that open source is part of the software ecosystem: commercial software companies reverse-engineer or mine open source software and systems for ideas and enhancements, just as open source groups reinvent commercial systems. Saying that open source is just low-cost subscription software completely misses the point about open source software.

Companies are failing to adopt open source in droves for several reasons: the lack of a purchasing cycle that is familiar to them (there are not many salespeople actively in the business of giving away software), the uneven quality of some solutions (a problem that could be rectified by actually participating in the open source software development cycle), and fears over intellectual property promulgated by private-sector vendors. They are also not considering open source alternatives because publications like Scientific Computing and authors like Michael Elliott obfuscate open source's benefits.

Open source in the laboratory informatics market follows a 'sphere of influence' expansion model: you can trace the use and adoption of a system along lines of influence that emanate from the originators of the product and those who supported its development. Where the power of that influence fades, the product's adoption and use fizzles. The project's originators are not going to spend their time and energy persuading third parties to utilize it; they are not going to submit to an eight- to fifteen-month sales cycle with a company to promote its adoption over commercial alternatives. It should therefore be no surprise that vendor-supported solutions achieve higher market penetration. But that should not lead you to conclude that the primary motivation for developing or utilizing such a system was based on cost alone. That is a leap, and an irresponsible one.

Where does this leave us? You cannot rely upon publications like Scientific Computing -- or, apparently, some research organizations -- to give you accurate information about open source software. If Scientific Computing's editorial staff had insisted upon a decent list of references from submitters, Mr. Elliott might have spent a few minutes reading about open source software. It is in your best interest to broaden your information technology news and information resources well beyond what Scientific Computing offers in order to obtain a more balanced overview of what is going on in computing.

Citation: What Scientific Computing Is Not. (2014). Retrieved Thu Sep 21 04:29:59 2017, from http://www.limsexpert.com/cgi-bin/bixchange/bixchange.cgi?pom=limsexpert3;iid=readMore;go=1395535781