Hamid A. Campbell
Dr. Adam Johns
ENGCMP 0200
01.13.2009
Are we really willing to take such a risk?
While the acquisition and passage of knowledge are essential to an ideal, functional society, the possession of too much knowledge can have dire, irreversible consequences. This idea is the central theme of Mary Shelley’s 1818 gothic novel, Frankenstein. In the novel, Shelley implicitly argues that new scientific knowledge and rapid technological advancement pose a threat to humankind. Bill Joy’s essay, “Why the future doesn’t need us,” also explores the idea of limitless knowledge and its possibly dangerous outcomes, particularly in the twenty-first century: the risk we take in discovering and applying novelties that may render man obsolete is the effective endangerment of the human species. Both Shelley and Joy seem to be apprehensive of the fate of mankind as a result of its very own creations, and the abuses and misuses that abundant knowledge enables. While Frankenstein is a work of fiction and Joy’s article purely an expression of concern from a scientific standpoint, both documents support each other by cautioning the reader of the eventual destruction and disasters that science and technology could impose upon humankind.
Joy’s article explores the threats of science using a variety of examples, the strongest of which is his comparison between the displacement of the South American marsupials by the North American placentals over the course of a few thousand years and the possible displacement of mankind by the technology for which it so longingly pines. Inhuman creations of man, such as robots, would be highly intelligent and complex, causing their human creators to develop an unhealthy dependence on them and to entrust these omniscient machines to “make more of their decisions for them,” effectively eliminating the necessity for human knowledge and logic in the future, thus the title of his article (Joy). Joy proposes that machines will compete vigorously with their human counterparts for matter, energy, and space, inevitably driving their inferiors to extinction, as did the North American placentals to the South American marsupials. This idea is also embodied in Frankenstein, in which a creature manufactured by the central character, Victor Frankenstein, begins to wreak havoc upon man, instilling fear in every person with whom he comes into contact and eventually causing the deaths of those close to Frankenstein either directly or indirectly.
The events in Frankenstein certainly shed light on and validate the fears of Joy. The unfortunate chain of events in the novel certainly arouses fears in the reader and should cause one to take Joy’s essay seriously. The novel forces the reader to contemplate the long-term risks of the present science and technology that seems to fascinate us. In his article, Joy takes care to convey his fear that “things [are moving] too fast,” expressing his concerns about the consequences of rapidly expanding technology. He expands this view by focusing on the twenty-first century technologies (genetics, nanotechnology, and robotics) and the ease with which they can be accessed by individuals and small groups, such as terrorists. Such technologies will not require elaborate facilities nor rare materials, but “knowledge alone will enable the use of them” (Joy). If Victor Frankenstein, a scientist looking to advance the field by pioneering the creation of his own being, and a man who possesses a tremendous amount of scientific knowledge, ultimately created something that caused as much damage as it did, can you imagine the destruction that such incredible knowledge can inflict upon mankind if placed in the wrong hands?
Humankind cannot afford to invest its fate in dangerous technologies, technologies that may or may not prove to be our friends in the future. The quest for knowledge is certainly one that should be embarked upon in a cautious manner, as the very knowledge that we seek could eventually be the initiator of our demise. In a world that is rapidly and ever changing, attention must be paid and care must be taken to ensure that the progress that science and technology allows us to make does not pose a fatal threat to humans, primarily progress in robotics, nanotechnology, and other fields that involve the creation of supernatural materials, devices, and beings.
The common misconception of new technologies is that they will all do a world of good all of the time. It has been postulated that nanotechnology and the manipulation of matter at the atomic and subatomic levels could create a “utopian” future of limitlessness, in which materials and goods could be more cost-efficiently produced, and just about any imaginable disease or ailment could be cured. While this is certainly an attractive idea, let us consider the negative ramifications of such advancement. In fields such as nanotechnology and molecular electronics, in which the most basic and elementary particles in nature are altered, it is much easier to create destruction than it is to be productive. The danger is implicit in Murphy’s Law, which broadly states that “anything that can go wrong, will.” Nanotechnological devices can be built to selectively destroy only certain demographics, such as a certain group of people or geographic region, providing this field with clear terrorist and military uses. In science, the slightest mistakes or miscalculations, even by the most brilliant and respected scientists, can sometimes mean a great, irreversible amount of danger and destruction, as evidenced by the monster in Frankenstein. Are we really willing to take such a risk?
Works Cited
Joy, Bill. "Why the Future Doesn't Need Us." Wired, Apr. 2000.
3 comments:
Evan Kelly
1/15/09
As we discussed in class, the first three paragraphs of your essay appear somewhat disconnected from the last two. There is too much summary of both texts in the first three paragraphs, and your perspective and analysis aren't really seen until the fourth paragraph. From there, you clearly display your shared animosity toward the leaps and bounds that we have been making in technology. You should try to incorporate all of these feelings into the entire paper.
I also think that adding your own personal plan of action to the paper would improve the end result, because it would further demonstrate your understanding and belief about what the world needs. It could also provide an arguable stance that will further attract the attention of your reader.
Furthermore, the summary of the texts that you gave in the first few paragraphs was very wordy, but not necessarily profound. We also mentioned this in class: you should try to dig deeper into the reading. Rather than mentioning the North American placentals' dominance of the South American marsupials twice in one paragraph, you could mention it once but then analyze more, possibly giving your own reasoning for why this incident could pertain to humans as well.
I also must disagree with the statement that "Such technologies will not require elaborate facilities nor rare materials, but 'knowledge alone will enable the use of them.'" You state this in reference to genetics, nanotechnology, and robotics. I am not a technology wizard, so if it is true that this seemingly complex technology can be obtained by terrorists through knowledge alone, then going into depth about the ease of this would clarify the statement for me.
Other than these critiques, I enjoyed reading your paper for a variety of reasons. I thought that you did a very good job of incorporating direct quotes from the texts to emphasize your statements. Rather than so much summary, I think that your own ideas, backed up with these quotes, would make your analysis stronger.
I personally do not agree with your stance, even though most people do, for a couple of reasons. One is that the probability that artificial intelligence becomes stronger than human intelligence is most likely just as minute as the chance that an Earth-ending meteor is headed toward us at this point in time. For that reason I would promote advancement in technology, because even though the dangers could become great, the rewards may be just as great or greater if some unforeseen catastrophe were to occur.
One tiny detail that bugged me is at the beginning of the third paragraph, where you use the word "certainly" in two consecutive sentences at the same place on the page. I think that the first sentence would be more appealing if you cut out the "certainly shed light on and", so that the sentence reads, "The events in Frankenstein validate the fears of Joy." It is a less wordy and also a stronger sentence, in my opinion.
Overall it was a very good paper. I mean, it was chosen to be analyzed in class. I hope that I was helpful.
Hamid A. Campbell
Dr. Adam Johns
ENGCMP 0200
01.19.2009
Are we really willing to take such a risk?
While the acquisition and passage of knowledge are essential to an ideal, functional society, the possession of too much knowledge can have dire, irreversible consequences. This idea is the central theme of Mary Shelley’s 1818 gothic novel, Frankenstein. In the novel, Shelley implicitly argues that new scientific knowledge and rapid technological advancement pose a threat to humankind. Bill Joy’s article, “Why the future doesn’t need us,” also explores the idea of limitless knowledge and its possibly dangerous outcomes, particularly in the twenty-first century. Both Shelley and Joy seem apprehensive about the fate of mankind as a result of its very own creations, and about the abuses and misuses that abundant knowledge enables. While Frankenstein is a work of fiction and Joy’s article purely an expression of concern from a scientific standpoint, the two documents support each other by warning the reader of the eventual destruction and disasters that science and technology could impose upon humankind.
Joy’s article explores the threats of science using a variety of examples, the strongest of which is his comparison between the displacement of the South American marsupials by the North American placentals over the course of a few thousand years and the possible displacement of mankind by the technology for which it so longingly pines. Inhuman creations of man, such as robots, would be highly intelligent and complex, causing their human creators to develop an unhealthy dependence on them and to entrust these omniscient machines to “make more of their decisions for them,” eliminating the necessity for human knowledge and logic in the future (Joy). Joy proposes that these machines will compete vigorously with their human counterparts for energy, space, and other resources, inevitably driving their inferiors to extinction, as did the North American placentals to the South American marsupials. Now consider the destructive potential that such a creature as the monster in Frankenstein possesses. With the intelligence, superhuman strength, gigantic stature, and lightning-fast speed that Victor Frankenstein gave to his creation, it would be only a short amount of time before a species of such creatures would outcompete humans for the resources necessary for existence.
The events in Frankenstein validate the fears of Joy by forcing the reader to contemplate the long-term effects of the present science and technology that seem to fascinate us. In his article, Joy makes it a point to convey his fear that “things [are moving] too fast,” expressing his concerns about the consequences of rapid technological advancement. He expands this view by focusing on the technologies that have bloomed in the twenty-first century (genetics, nanotechnology, and robotics) and the ease with which they can be accessed by individuals and small groups, such as terrorists. Unlike the twentieth-century technologies, these will require neither elaborate facilities nor rare materials; mere “knowledge will enable the use of them” (Joy). If Victor Frankenstein, a man of great knowledge simply looking to advance the field of science, ultimately created something that caused as much damage as it did, can you imagine the destruction that such incredible knowledge could inflict upon mankind if placed in the wrong hands?
Like every other science fiction novel I’ve read, Frankenstein presents ‘fictional’ elements that are largely within the realm of possibility. I recently discussed with my father the research being done at Carnegie Mellon University that essentially enables mind-reading by using functional MRI scanning in conjunction with computer science. He strangely did not seem surprised, and when I questioned his lack of amazement, he simply said, “The science fiction I was reading about in the 1960s, they’re doing now.” This statement made me think about what the future of technology holds and how far nature will allow us to go before it starts pushing back. I, like Bill Joy, do not believe that humankind can afford to invest its fate in dangerous technologies, technologies that may or may not prove to be our friends in the future. The quest for knowledge is certainly one that should be embarked upon in a cautious manner, as the very knowledge that we seek could eventually be the initiator of our demise. In a rapidly and ever-changing world, attention must be paid and care must be taken to ensure that the progress that science and technology allow us to make does not pose a fatal threat to humans, primarily progress in robotics, nanotechnology, and other fields that involve the creation of supernatural materials, devices, and beings. I feel that the goals of the future of technology need to be explicitly outlined, and that the research and development of technologies that do not comply with this set of goals should be prohibited. Furthermore, I think that a detailed ethics code needs to be agreed upon and established, facilitating the regulation of these new technologies.
The common misconception of new technologies is that they will all do a world of good all of the time. It has been postulated that nanotechnology and the manipulation of matter at the atomic and subatomic levels have the potential to produce a “utopian” future of limitlessness, in which materials and goods could be more cost-efficiently manufactured, and just about any imaginable disease could be cured. While this is certainly an attractive idea, let us consider the negative ramifications of such advancement. In fields such as nanotechnology and molecular electronics, in which the most basic and elementary particles in nature are altered, it is much easier to create destruction than it is to be productive. The danger is implicit in Murphy’s Law, which broadly states that “anything that can go wrong, will.” Nanotechnological devices can be built to selectively destroy only certain targets, such as a particular group of people or a specific geographic region, giving this field clear terrorist and military uses. In science, the slightest miscalculations, even by the most brilliant scientists, can sometimes mean an irreversible amount of destruction, as evidenced by the monster in Frankenstein. Are we really willing to take such a risk?
Works Cited
Joy, Bill. "Why the Future Doesn't Need Us." Wired, Apr. 2000.
Evan - You did a great job expanding on things said in class - there were some good details here. Your job was a tough one, and you did it well.
Anthony - I hope that I was clear in class that I saw quite a few good things in the paper. I did then, and I still do. Moreover, you certainly made some positive changes in the paper when revising - the first several paragraphs seem streamlined to me, for instance, and I was very interested in your discussion of "mind-reading."
That being said, this paper still reads like an articulate and interesting repetition of some of Joy's main points. The edge has been blunted in this version, and your voice is coming through more, but my main question is: "What is Anthony *adding* to Joy's argument here?"
I'd argue that you could have done a more thorough revision around the MRI idea. Rather than repeating Joy's general argument, you could have started out with the MRI/mind-reading, and used that as a compelling example of how Joy and Shelley aren't just blowing hot air - their fears are both valid and contemporary.
This paper is interesting, but the best (and most personal) material isn't emphasized.