Professor Chen, Professor Huang, Professor Wei-An Chang, distinguished guests. I am very grateful for the invitation to National Tsing Hua University. I am flattered and thrilled to be able to speak at today's conference. Thank you so much!
This lecture is entitled rather cryptically: "From Private Ink to Public Bytes: a hidden impact of the Internet revolution on Social Sciences and Humanities."
I think that there are two distinct parts within this lengthy title. The first part alludes to the contrast between "ink" and "computer" technologies. The second part invokes the complex nature of the Internet and its impact on what we do. Let us start with the first part of the title. When I use the word "ink", I refer, of course, to a certain technique of recording and disseminating information. And when I refer to "private" ink, I allude to the social status of that informational technology. So, when I talk about documents written on paper with a pencil, a typewriter, or a brush; when I talk about a letter or a drawing - as long as such material is accessible to just one person, or to a small group of friends - I talk about an essentially small-scale, private level of communication.
On the other hand, diametrically different arrangements are also possible. For instance, for the last several hundred years (in Europe, at least) people have had ready access to the technology of print. They have had access to a technology which can greatly multiply and amplify inexpensive flows of information. Print created vast realms of multiple copies of newspapers and books, as well as realms of printed posters and leaflets. There is also an extensive world of journals and journal articles. So, in all those cases we can talk, I think, about "ink" which is deployed in a large-scale, "public" fashion. This means that we can talk about information which reaches an audience situated beyond the confines of face-to-face interaction, beyond a gathering assembled in a study, a lecture hall, or a classroom.
Now, having made this crucial distinction (i.e. "private" vs. "public" ink), our next conceptual step is very easy. We can immediately perceive that whoever uses a standalone computer or a PDA device can be said to engage in the use of information in the form of digital signals, and to do so in a relatively small, controllable and therefore private sphere. However, as soon as such a device is networked, the potential for a public exchange of bytes is immediately created, with all its manifold options for information storage, indexing, retrieval, and dissemination across the entire network.
So, to an unskilled or overenthusiastic person, the entire history of a culture, or of a group of civilisations, simply begs to be divided into neatly parcelled-out technological stages. Thus our societies can be said to have moved from slow, inefficient past times - ones marked by the circulation of information in the shape of private ink - into more recent times, those marked by the reign of public ink. Moreover, one could propose a further evolutionary step. For example, one could postulate that in the last thirty years or so we have made a new transition, one from the realm of isolated and restricted private bytes into the universe of freely available and freely moving public bytes.
All such typologies and periodizations are seductively elegant and simple. They hint at the never-ending march of technological progress, and they offer us the prospect of an exciting and fascinating future. However, closer inspection reveals that such a future is fraught with unforeseen complications.
The Internet is widely regarded as the largest and most important social and cognitive development in the last 500 years, that is, since the time when Johannes Gutenberg started printing books in Europe (Harnad 1991, Dewar 1998). In Europe, at the time when Gutenberg commenced his operations in 1455, there existed approximately 30,000 books in the form of handwritten manuscripts, scrolls and codices. Fifty years later, that number had grown to approximately 9 million printed volumes. Even in purely numeric terms these changes represent a truly dramatic and remarkable social and intellectual development: a 300-fold (that is, roughly 30,000%) growth in the number of publicly accessible documents over the space of 50 years, or an average of some 600% growth per annum in simple linear terms.
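Incidentally, if one treats that growth as compound rather than linear (a back-of-the-envelope calculation using only the two figures just quoted), the implied annual rate is more modest, but no less extraordinary:

\[ \left(\frac{9{,}000{,}000}{30{,}000}\right)^{1/50} = 300^{1/50} \approx 1.12 \]

that is, a sustained growth of roughly 12% per annum, compounded year after year for half a century.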
We are aware that a parallel revolution is happening today. It started at the end of October 1969, when two hitherto separate computers were connected to each other for the first time. Less than 33 years later, in early 2001, there were world-wide more than 147.3 million networked computers (Internet Software Consortium 2002), with many thousands of new machines being added to the global archipelago on a daily basis, on an hourly basis, almost every minute. Moreover, additional developments are also afoot. The 1969 communication link established between those first two computers brought about the online interaction of a handful, maybe a dozen or so, individuals. Since then the number of people who interact with each other online has grown exponentially. Less than 33 years later, in February 2002, there were over 544.2 million people who were regular users of the Net (Nua 2002).
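Again, a rough calculation conveys the pace (this assumes, purely for the sake of illustration, steady exponential growth between the two data points just quoted):

\[ \log_2\left(\frac{147{,}300{,}000}{2}\right) \approx 26 \ \text{doublings in about 31 years,} \]

that is, the population of networked machines doubled, on average, roughly every fourteen months for over three decades.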
So, in the face of this most energetic technical revolution ever recorded, we are led to ask the inevitable question: "what are the unanticipated consequences of our use of this large-scale global network for the exchange of digital information?" As all present in this seminar room know, the Internet was originally created only to promote efficiency in the utilisation of costly computers. Other uses came unexpectedly. In the earliest days of the Net, the main reason for which the system was proposed and built, initially in the form of the ARPANET, was to link a series of high-powered computers, so that a researcher, say, in the state of Utah in the USA could connect to a more efficient computer in California, and a researcher with special computing needs in Michigan could easily use such computers in Connecticut. It was only later, once the elementary backbone was established, that other developments - such as the introduction of email, or the placement of electronic documents on the Net so that they could enjoy permanent and public audiences - took place. These additional developments were not deliberately planned; they were created more or less spontaneously and in the form of a series of instalments.
As we all know, a lot of other things have happened since then. So, let's look at the unanticipated, unintended consequences of all those inventions, especially as they constitute such a dominant part of our recent societal and economic history. Moreover, unintended consequences are one of the most promising areas for social sciences research (Popper 1969).
For instance, if we reflect again on the introduction of the European printing press, we note that it was originally meant to produce copies of the sacred books of Christianity in a manner which was quicker, more accurate and less expensive than handwriting. But in addition to the realisation of those purely utilitarian outcomes, we also find that print eventually brought about the consolidation and standardization of distinct national languages in Europe (Eisenstein 1983). Language groups which happened to have easy access to printing presses were able to thrive and consolidate, because they could refer to records of their daily utterances and of the exchanges of thought between various people. And once that became possible, they could arrive at a certain standard vocabulary and at rules of grammar. On the other hand, those linguistic groups which did not have print's power of turning privately created information into public property tended to dissolve into dialects embedded within the more dominant cultures.
Another example of an unintended consequence of a supposedly neutral technical invention is provided by the events surrounding the invention of railways. In either 1823 or 1825, an English engineer called Stephenson started experimenting with a mechanical cart powered by a steam engine and rolling on a pair of metal tracks. So, initially all we see is a purely technical invention. However, barely fifteen years later, in 1840, uniform London time (GMT) started to be adopted across the whole of England. GMT became the single and standard time for all the places spanned by an ever-growing railway network. This was a momentous cultural and epistemological transformation indeed. Until the invention of railways, every town and every village had its clocks and timepieces set according to the local occurrence of the sun's zenith. However, the spread of railways relentlessly unified and synchronised all the differing clocks. Naturally, there was a lot of popular resistance and foot-dragging in response to such synchronisation. And yet, ultimately, all those widely scattered localities gave up their concrete, local astronomical precision for the sake of the accurate coordination of distant railway schedules. So, we can see a process which first started in England, and which was soon transposed to mainland Europe. Eventually, other continents followed suit. The United States and Canada adopted standard railway time in 1883. Finally, in 1884, the whole world became unified by means of 24 hourly time-zones, while the all-important (to global maritime communication) International Date Line was also defined and codified. In short, the new technology, in order to be fully successful, demanded uniformity of behaviour from all those thousands and millions of people who were exposed to it. Because of this need for a world-wide uniformity of actions, time-keeping conventions which for millennia had been treated as local, negotiable and private became inescapably public, codified and global.
So, today I am going to ask a question about the unintended consequences of the Internet. More specifically, I am going to ask: "does the Internet impact in any fashion on the way we tend to think about scholarship, about scientific research, and about knowledge itself?" Naturally, it is not just a question of whether academic life has become more (or less) complicated due to the advent of email, which enables us to interact with distant colleagues who work in London, Tokyo, or Johannesburg. Nor is it a mere question of gauging the effectiveness of such time-saving (or time-consuming) devices as the online catalogues of the University of Hawaii, which for the last ten years or so could be queried remotely, that is, without the need for a researcher to travel physically to that island. Yes, these are superbly useful, very important and almost miraculous tools and facilities. However, today I would like to ask a more fundamental question: "Does the Internet change the way we think about the practice of being a researcher?" It is a question very seldom asked, and answered even more rarely.
If we reflect on what science is - if we reflect on the structure of that tradition of thought which involves the use of precise and replicable methods of research, the use of logic in bringing observations together, and, finally, the use of logic in the interpretation of those observations - it is possible to discern a major intellectual schism within this special tradition. Basically, at this stage in our evolution, it appears that there are two co-existing philosophies of knowledge (e.g. Magee 1973:18-22). One of them is the "critical knowledge" approach, while the second can be loosely labelled as "cumulative knowledge."
The starting point for "critical knowledge" is the realisation that there exists a problem worthy of our attention. Such a problem exists simply because what we know about the world does not match the way the world is observed to behave. So, that realisation leads to an intellectual tension, a "mental itch". If this occurs, the researcher becomes infused with a burning question: how does the world "really" work? What is this thing in our field of inquiry that eludes understanding? The concept of "critical knowledge" is intimately linked to the name of the Austrian-born philosopher Karl Raimund Popper. Popper was the first to remark that the road to critical knowledge starts with an initial formulation of the research problem. The next stage on that road is a tentative theory, or an explanation of the intriguing phenomenon. The act of constructing the tentative explanation inevitably takes us to the third step. This particular stage of our road to discovery involves the critical discussion of the problem, as well as of our observations so far and of our tentative explanatory account of them. Critical discussion also involves ample consideration of alternative views of the problem at hand and of the existing explanations. Following such a critical discussion comes step four, namely the revision and re-definition of the research problem. In the course of that step we re-state and re-evaluate, as well as we can, the initial difficulty which launched us on the path of our investigations. So, according to the tradition of critical knowledge, we are always dealing with a growing body of well-articulated and deeply understood problems, and with their "intellectual ecologies". This is so because scientific problems do not occur by themselves. They occur in the exacting context of other scientific problems, as well as in the context of all previous efforts to resolve them (Popper 1994:101-102).
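It is worth noting that Popper himself, in his later writings, condensed this four-step cycle into a famously compact schema, in which \(P_1\) stands for the initial problem, \(TT\) for the tentative theory, \(EE\) for the elimination of errors through critical discussion, and \(P_2\) for the new, re-defined problem:

\[ P_1 \rightarrow TT \rightarrow EE \rightarrow P_2 \]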
There is also a twin, and greatly dissimilar, school of thought. Practitioners of the "cumulative knowledge" approach start their research with observations. Of course, in order to observe something, one needs to have a good question in one's head, a question prompted by a prior hypothesis or theory, but clearly such a question does not seem to be accorded here the key role it enjoys within the framework of "critical knowledge." Instead, observations - precise, trustworthy, thoughtful and ample observations - seem to take precedence. Sometimes systematic observations are embarked on because we need to address an urgent practical issue; sometimes they are undertaken as part of one's life-long pursuit of an expert understanding of the structure and behaviour of a given segment of reality. Sometimes they are embarked on (as is the case with electronic "data mining", or archaeological excavations) as part of a grand fishing expedition, in the course of which sources of information are systematically trawled in search of some meaningful though unpredictable correlation between a range of variables. One way or the other, carefully conducted observations lead to an accumulation of data. The volumes of collected data may vary from context to context. Sometimes one has only a few points on a scattergram; sometimes we may be lucky and have many data points. Demographers, for example, tend to deal with large volumes of numeric data. Historians and archaeologists, on the other hand, have to rely on less numerous, or even scanty, evidence. Following the observation and data collection phases there is step number three - the analysis of data. When we look in depth at our materials, we soon realise that certain patterns, certain regularities, certain trends can be discerned. Inevitably, if we look at our materials long enough, sooner or later we are able to find some aspect of our data that we can report and comment on. So, on the basis of such analysis, we move to stage number four: we start building models - simplified and generalised representations of reality. Some models are mathematical and involve the design of an equation, a matrix, or a computer simulation. Other models can be graphic, such as charts and plans and maps. Still others rely heavily on verbal formulations, whether in the form of chronological narratives or in the form of systematic accounts and descriptions. After that is accomplished, we can return to step number one. More observations are carried out, and more data are collected and analysed, so that those representations, those models, can be further refined, reordered and strengthened, and - if we are successful - converted into a permanent theory, a generalized account of the studied fragment of reality. This all means that, according to that "cumulative" school of thought, human knowledge could be defined as a growing body of replicable methods and general laws, and - of course - of dependable, trustworthy data.
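By analogy - and this shorthand is my own, not a formula from the literature - the cumulative cycle just described could be condensed into a parallel schema, running from observation (O) through data (D) and analysis (A) to model (M) and, with luck, theory (T), before looping back to fresh observations:

\[ O \rightarrow D \rightarrow A \rightarrow M \rightarrow T \rightarrow O \rightarrow \ldots \]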
As we can see there is a strong contrast between these two epistemologies. This is further illustrated by the table below.
CRITICAL knowledge                                    | CUMULATIVE knowledge
------------------------------------------------------|-------------------------------------------------------
seeks correspondence with the real nature of facts    | seeks certainty in the face of complexities and errors
favours inspirational problem-solving                 | favours systematic puzzle-solving
formulates temporary "conjectures"                    | formulates permanent "laws"
emphasis on imaginative explanation                   | emphasis on precise and complete description
drives increasingly closer to the "truth"             | becomes increasingly more accurate
favours revolutionary transformations of what we know | favours incremental corrections to what we know
Naturally, other major differences can also be found. As we know, life is unpredictable, and what we find out about it is never good enough. So, what happens when we are confronted with new evidence? What happens if a particular body of scientific opinions finds itself in some way inadequate? Well, the difference between the responses and strategies of the two intellectual traditions distinguished above is quite dramatic.
The "critical knowledge", when it find itself inadequate, strives to radically reform its inner logic. It decisively supplants all earlier conceptualisations and meta-conceptualisation, with yet another brand-new and temporary meta-meta-conceptualisation. In a way the whole intellectual process resembles a a sequence in which a caterpillar is being hatched from an egg, a chrysalis being formed by a caterpillar, and a butterfly being born from the pupa.
In similar challenging circumstances the "cumulative knowledge" approach resembles (and here no unfavourable judgement is implied, as I am looking only for a handy metaphor) the behaviour of a single-celled organism, an amoeba, when it is prodded by an electrical discharge, or perhaps exposed to some acid. In those cases the body of cumulative knowledge usually changes its "shape" (its area of investigations), while simultaneously trying to preserve as much as possible of its body of replicable methods and general laws, and - of course - of its dependable, trustworthy data (Magee 1973:24-25). Within the tradition of cumulative knowledge, erroneous theories are seldom formally overthrown; instead they are silently shunned in favour of other, more promising (at a given instance) accounts of reality. So, like critical knowledge, cumulative knowledge is also a well-developed and well-seasoned adaptive system.
All in all, critical knowledge appears to be the more dramatic of the two; it is more akin to the "sudden enlightenment" schools of Zen (so to speak) as it goes about the brutal business of protecting its objective engagement with the world. Cumulative knowledge, on the other hand, seems to behave more like the "gradual enlightenment" branches of Zen, as it undergoes silent changes to protect its hard-earned, trustworthy content.
It is time that we ask ourselves: how do these two approaches to knowledge relate to each other? A number of answers seem to be possible.
In one sense we could say that they represent two distinct and consecutive stages in the history of science. The first stage, which started around 1620, was initiated by the work of Francis Bacon, an English philosopher. Bacon's seminal book "Novum Organum" (i.e. the New Tool, the New Range of Equipment) gave rise to Western experimental science, and to its distinctively "cumulative" philosophy of knowledge. That epistemological tradition flourished in the 19th and early 20th centuries. We hold as evidence the tremendous achievements of that era in chemistry, physics, geology, and zoology. We also know about the parallel discoveries in sociology (well, proto-sociology), and about great leaps in history, classical studies, and linguistics. That period in the history of science was brought to an end, however, in 1935, when a young scholar, an Austrian logician, Karl Popper, published his ground-breaking book "Die Logik der Forschung". Popper's work was subsequently published in English, in 1959, as "The Logic of Scientific Discovery." So, in one sense, we can conclude that the cumulative school of thought which reigned for about 300 years (from 1620 to 1935) was eventually superseded by the newer tradition, that of critical knowledge, and that the more recent epistemology has been our intellectual guide for the last 70 years or so.
However, two other interpretations are also valid. Some forty years ago, Thomas Kuhn, an American physicist and philosopher of science, wrote a book on the transformations that physics and astronomy experienced during times of major scientific revolutions. In that book Kuhn (1962) argued that occasional periods of dramatic, fast-paced scientific breakthroughs (i.e. times when one set of theoretical frameworks is suddenly replaced with another) are separated by lengthy periods of relatively stable, normal, puzzle-solving science (Kuhn 1970:4-11), during which scientific research progresses routinely and cumulatively. So, one could conclude that Kuhn describes the real-life practice of academics, their daily behaviour, while Popper charts an idealised etiquette, the preferred but hardly realistic norm for all scholarly activities (Williams 1970:50). Or perhaps, as some scholars suggest (e.g. Watkins 1970:32-33, Magee 1973:41), we might wish to conclude that cumulative and critical types of knowledge constitute in fact two inseparable, because indispensable, complementary aspects of the same single continuous intellectual process. This would mean that, in order to be effective researchers, we need to take equal advantage of both epistemologies.
Of all three interpretations, it is the last one that appears to be the most convincing. It is an interpretation which proposes the intimate co-existence and fruitful co-operation of great prudence and great vision.
Yet one of the parties in this philosophical relationship, the "critical approach" to research, is clearly a remarkable one. The reasons are several. Thanks to the "critical approach", for the first time in the history of knowledge mistakes and errors cease to be our enemies. Instead - unexpectedly - they turn into our friends. This is so because we can thoroughly learn from them. Regular and cheerful encounters with our blunders and shortcomings are, actually, indispensable to the success of our long-term work. Once our mistakes are recognised and well analysed, they help us to do our subsequent work better. So, the emergence of critical knowledge is a truly dramatic development in the history of science. We no longer need to attach ourselves to a particular point of view. Instead, we start attaching ourselves to truth. At any given moment, a particular point of view serves only as a transient approximation of that much-prized "correspondence to the facts" (Popper 1994:111).
In addition, critical knowledge brings us selflessness, as it encourages energetic public discussion of our intermediate solutions to the recognised problems of our work. And it also gives us a wonderful sense of tradition and grounding in the past...
Yes, there is a paradox: the framework of critical knowledge is quite happy to swiftly replace one point of view with another (providing that the resultant informational content is actually increased, and that it can be objectively tested for its validity). Yet it also strongly promotes tradition and the continuity of our intellectual endeavours. Progress in every research discipline is strongly predicated upon the uninterrupted collective memory of all the past problems which have been grappled with, successfully and otherwise. In other words, at any point in our research and thinking we are absolutely dependent on the full intellectual background of the problem currently at hand, and on its entire history. This places an obligation on scholars always to keep a complete and unadulterated record of all previous theoretical meanderings, temporary solutions, and temporary thoughts. Otherwise, it becomes very hard to make real progress.
So, now we can return to our inquiry concerning the Internet: How do our daily uses of the Internet fit the above picture? How do they impact on our daily epistemological conduct as social scientists?
The success of the Internet as a global network for the production and exchange of electronic information is self-evident. In late May 2002 there were over 222,000 LISTSERV lists (L-Soft International 2002); 100,000 e-mail newsletters emanating from 70,000 individual publishers (Topica Email Publisher 2002); and over 350,000 USENET groups (Google 2002). Also, there were over 38.2 million Web servers (Zakon 2002) carrying a total of 10 billion electronic documents, or the hypertext equivalent of the entire holdings of the US Library of Congress, the largest library in the world. And all those resources are readily accessible anywhere and anytime by anyone with a connection to the Internet.
Interestingly, that growth and proliferation of Internet-based information is not an intentional outcome of the activities of any single organisation, or of a group of organisations working to a common plan. This gigantic development happened perfectly spontaneously, and virtually against the laws of any ordinary logic. However, some regularities crop up. If we look at the years between 1969 and today, we can discern some five common elements without which it is unlikely that the Internet would be as ubiquitous and usable as it is today.
First of all, there has been a philosophy of "open source" with regard to the programs, the software. Both the algorithms and their implementation in the form of code written in a particular programming language were put in full public view. Secondly, we can observe the reign of a "bazaar"-style, piecemeal approach to programming and engineering problems. This "bazaar" approach to design was found to be a far more effective philosophy than the use of structured, methodical approaches - the latter strategy sometimes compared to the "building of a cathedral." The "bazaar" approach (Gabriel 1996, Raymond 1997-2000, Cavalier 1997-1998, Ditlea 1999) recommends that a minor but promising product be released, and that this release be followed by a series of quickly-paced revisions and adjustments. Moreover, any problems with that product are publicly communicated, and descriptions of those problems are recorded, i.e. documented and archived for future public access. In addition, as many past versions as possible of solutions to various problems are permanently stored and kept in full public view. The third ingredient of the success is the tight, frequent, "moderated" and intensive communication loops which enable people to work together at great speed. Fourthly, participants in all those public discussions dealing with ways to improve an aspect of a given piece of software tended to be rewarded for the substance of their contributions, and not for the clever phrasing of their communications. What matters is one's ability to make a new, clear, substantive point; it is certainly not a question of eloquence, or of the public-relations value of one's email. Finally, there is the fifth point to note: the ultimate success or failure of a current iteration of a product is to be judged in terms of proof of the formal integrity of the proposed solution; in terms of correct performance under all circumstances, including adverse circumstances; and in terms of the solution's user-friendliness.
This innovative five-fold value system, underpinning the spontaneous emergence of a global networking infrastructure, its hardware, applications and formal protocols, was found to work in almost all situations. Its fruits are many. One of its results is the immense popularity and widespread use of the UNIX operating system. Email constitutes another example (Hafner & Lyon 1996:191-206). Still another example is offered by the victory of TCP/IP over the very thoughtful, very logical, very promising Open Systems Interconnection (OSI) protocol, a specification which - in addition - had the administrative backing of a number of standards bodies (Hafner & Lyon 1996:252). The "bazaar" approach also underpins the process in the course of which Gopher subsumed, in 1991, the navigational functions of the FTP Archie software, only - in turn - to have its incipient hypertext capabilities wholly displaced, around 1994, by the newly established WWW (Ciolek 1999). And, of course, there is the case of the exponential growth of the WWW technology itself (Shirky 1998) and, more recently, of the runaway success of the LINUX operating system.
All the above is sufficient to lead us to a general conclusion: the dominant strategies used in the creation of Internet infrastructures from 1969 onwards, from the times of the ARPANET and NSFnet till today, strongly resemble the epistemology of "critical knowledge."
But this is not the end of the story. Further and complementary details can be seen to emerge.
The e-publishers' key tool has been (and continues to be) the WWW. Why is this so? Well, the WWW conferred three magnificent advantages on anyone interested in online publishing (Ciolek 1999). First of all, the WWW is immensely attractive because of the great freedom and richness with which information resources can be built with the aid of the Web's hyperlinks. Secondly, the Web provides us with a very simple method for structuring and formatting text, as well as for combining it with files containing images and sounds. Thirdly, and possibly most importantly, the Web is the first technology in the history of humanity which gives an average person a complete and user-friendly tool-kit for wide-area (and therefore mass-audience) electronic self-publishing. This vital objective was not achievable at all with any of the earlier electronic tools, such as the anonymous FTP archives, Wide-Area Information Servers (WAIS), or Gophers. The Web forms the turning point in the history of people's relationship with the Internet. Until the advent of the Web, all electronic publishing projects, all activities - however large or small - always had to be mediated and authorised by the technicians in charge of a machine with FTP, WAIS or Gopher servers. However, the arrival of the WWW completely changed that situation. The arrival of the Web meant that literally anybody with an account on any machine with WWW server software was able to release his or her electronic information for world-wide public access and inspection, at any time and in any volume, and, what is more, was able to do so freely, easily and speedily.
So, the WWW was destined to become the most popular and most heavily used online publishing tool in all parts of the world. And since about 1994 it has been used very extensively to move information from the realm of private ink to that of public bytes. However, this large-scale process has taken place in a very idiosyncratic manner.
The idiosyncrasy is both real and unexpected. The open source approach, the open document philosophy (which we have seen embraced so widely and so earnestly among the infrastructure builders), has also found its followers amongst the burgeoning electronic publishers. But that highly effective and disciplined value system has been deployed with a special twist. The implementation of that openness and full visibility contains three large-scale surprises.
Firstly, this free-online-documents policy definitely favours the publication of online commentaries and discussions over the online provision of the full evidence on which their conclusions rest. In other words, information about social sciences data collection techniques, and about data analyses, is placed in full public view. However, the data themselves tend to be published only in a highly selective and aggregated manner.
I have been working online for many years. However, during all these years I have not encountered many people who would publish their online works and, at the same time, hyperlink them to companion electronic documents displaying all the cited data, especially data listed at the level at which they were originally collected. This is a rather strange situation, to say the least. Despite the ubiquity of electronic storage, despite its real potential for creating easy world-wide access to such information, and despite the remarkable cheapness of such electronic data storage, there is a definite lack of interest on the part of social sciences and humanities scholars in releasing to the public the complete contents of their data sets. Instead, the prevailing norm is that only summary cross-tabulations and aggregated listings of selected materials tend to be offered online. This suggests that the old logic of paper publication - a logic in which the size of the informational "real estate" (measured in square inches and the number of pages) is always severely limited, and in which authors are forced to summarise, abstract and compact their evidence - has been carried over into the diametrically dissimilar environment of the electronic world.
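To make the point concrete: releasing record-level data alongside the customary summary costs almost nothing. The following sketch (in Python; the file names, column names and records are hypothetical illustrations, not anyone's actual data set) writes out both the complete raw records and the kind of aggregated cross-tabulation which, at present, is usually the only thing placed online:

```python
# A minimal sketch of the practice argued for above: publishing the
# record-level data alongside the usual aggregated cross-tabulation.
# All file names, column names and records are hypothetical.
import csv
from collections import Counter

# The full data set, at the level at which it was originally collected.
records = [
    {"respondent": 1, "region": "north", "uses_email": "yes"},
    {"respondent": 2, "region": "south", "uses_email": "no"},
    {"respondent": 3, "region": "north", "uses_email": "yes"},
]

# 1. Release the complete records - a few kilobytes of storage at most.
with open("survey_raw.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

# 2. Release the customary summary cross-tabulation as well.
tally = Counter((r["region"], r["uses_email"]) for r in records)
with open("survey_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["region", "uses_email", "count"])
    for (region, uses_email), count in sorted(tally.items()):
        writer.writerow([region, uses_email, count])
```

Both files could then be hyperlinked from the online paper itself, so that readers may re-analyse the evidence at the level at which it was gathered.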
The second notable development is that, despite very promising, albeit unruly, beginnings in the use of Usenet groups and early email lists, the majority of social sciences online public and "moderated" communication loops are increasingly used for reporting, and not for the more reflective and discursive exchanges.
The trend away from the discursive mode of public email exchanges appears to be gaining momentum. I can see it clearly in the exchanges posted to the ten or so scholarly mailing lists I subscribe to. During the seven or eight years I have been using those lists, there has been an ever-diminishing number of critical discussions of anybody's work. There seems to be a growing sentiment among those involved that an attempt at such a discussion is to be regarded as an uncivilised and hostile act. Any discussion or analytical comment voiced in a large-scale electronic forum tends to be perceived as an unwarranted attack 'ad hominem', even if the only thing which ever manages to get discussed, dissected and criticised is that person's work. There is a growing tacit consensus that initiators of such public discussions are not "good academic colleagues" at all, and that they should be ostracised. So, in consequence, any substantive comments on a piece of electronic work tend to be communicated to the interested parties out of public sight, that is, via personal email, or during a small-scale face-to-face meeting. In other words, the tradition of energetic public critique of one's work (so happily engaged in by software engineers and other developers of the Net) is now energetically shunned by those who use the Internet for the purposes of electronic publication.
The third major tendency which can be observed among electronic publishers is their abandonment of the original principles of the "bazaar" (i.e. piecemeal re-structuring) approach. These days, when a major (and promising) electronic paper is launched, the publication is indeed followed by a series of minor adjustments to the document. However, such corrections are implemented in a very special, and startling, way. The person who receives useful feedback from his or her colleagues returns to the online document, corrects it in the light of the received remarks, and then... republishes the amended work at its original online address! In other words, the freshly enhanced document is almost invariably used to obliterate its former, less adequate iteration. And this, unfortunately, is a very common practice. I have been a user of the WWW for nearly 11 years, and yet, among the tens of thousands of online documents I have happened to visit, only once or twice have I seen a disciplined sequence of versions 1.0, 1.1, 1.2, and so on, of the same scholarly work. Simply, there is no custom, and no expectation either, that a full sequence of the major embryological stages of a given idea or methodology will be released on the Net. Instead, online audiences always encounter the latest and enhanced incarnation of a given research paper, albeit with all traces of its less perfect past perfectly removed and concealed. We seem to be oblivious to the fact that such a practice annihilates the history of any errors, of any logical blunders, of any problems present in the work in question. That loss is simply rendered invisible, whereas the author's gains are immediately displayed online for all to admire: for his research and arguments present themselves as invariably correct and eternally true.
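The versioned alternative - the one the "bazaar" tradition takes for granted - is technically trivial. Here is a minimal sketch (in Python; the file names and the version-numbering scheme are hypothetical illustrations) in which every superseded iteration of a paper remains permanently on public view, and only a stable "latest" copy is updated:

```python
# A minimal sketch of versioned self-publishing, in the spirit of the
# "bazaar" tradition described above: each revision is stored under its
# own version number, and a stable "latest" copy points to the newest one.
# File names and the version scheme are hypothetical.
import shutil
from pathlib import Path

def publish_revision(draft: Path, archive: Path) -> Path:
    """Copy a revised draft into the public archive as the next version,
    keeping every earlier version permanently visible."""
    archive.mkdir(parents=True, exist_ok=True)
    previous = sorted(archive.glob("paper-v*.html"))
    versioned = archive / f"paper-v{len(previous) + 1}.html"
    shutil.copyfile(draft, versioned)                      # v1, v2, v3, ... all kept
    shutil.copyfile(draft, archive / "paper-latest.html")  # stable public entry point
    return versioned

# Example of use:
# publish_revision(Path("draft.html"), Path("public_html/my-paper"))
```

With such an arrangement, readers always find the newest text, while the full embryology of the argument - errors, blunders and all - remains open to inspection.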
But there is a dear price to be paid for such a practice.
If corrected versions of documents are used to overwrite (replace, destroy) their previous iterations, and if such a practice is widespread and unacknowledged, then the phenomenon of online self-publishing (which has been made possible by the WWW technology) has a number of deep and long-term consequences. First of all, scholars' practical reactions to the errors uncovered in their works are firmly relegated to the private sphere of their activities. Secondly, onlookers are excluded from the process, and they do not have a chance to learn from the known and most recently identified mistakes. Thirdly, despite the normative exhortations of Popper, errors start being regarded, again, as something shameful and unmentionable. They are turned again into something that needs to be swiftly rectified, while the act of their correction is simultaneously re-structured as a private and embarrassing activity, one which equally swiftly needs to be covered up and consigned to an Orwellian "memory-hole." And all these developments can only mean one thing:
We are forced to conclude that our initial cheerful formula describing the informational transformations of our cultures, a formula which highlights the swift and triumphant transition from "private ink => public bytes", represents, from the point of view of our epistemological analyses, a very mixed blessing.
What we actually find is that during the first 25 years of the Internet's history (1969-1994), the manner in which the ARPANET/Internet was built actively and overtly promoted the philosophy of "critical knowledge." However, since 1994, that is, since the mass adoption of the WWW as the dominant online technology, the manner in which the Internet is used actively yet silently reinforces the philosophy of "cumulative knowledge."
Perhaps, during the months to come, and hopefully starting with today's conference at the National Tsing Hua University, we will be able to locate and identify other unintended consequences of our private and public behaviours in cyberspace.
And perhaps, if we are diligent and fortunate enough, we will be able to identify the consequences of our ways of coping with those discoveries as well. Thank you!