Monday, March 16, 2009

Final Project Proposal

My final project will concern the loss of motivation that future technology has the potential to bring. We’ve already discussed the motivational problems that can result from genetic engineering forcing humans to enjoy or be good at certain activities. I’m going to look at other motivational problems that lie farther in the future but are also more serious. Ray Kurzweil’s book The Singularity is Near presents several possibilities. For example, he predicts that by the 2030s mind uploading will be possible, enabling humans to copy knowledge directly from one another. Everyone who participates in this sharing of knowledge will obtain every piece of information ever gathered about the world, made possible by an enormous increase in the brain’s speed and capacity. But this means there will be no incentive to learn anything, since everyone will be made equal. A second example is longer term: Kurzweil believes that sometime after roughly 2060, the universe will be completely transformed into a supercomputer. At that point, however, any remaining motivation to improve life will disappear, since the universe will truly be at its limit.

Proponents of this view say that the loss of motivation is acceptable because life will have become perfect, and that any attempt to preserve motivation would slow down improvements that could save people’s lives. I believe, however, that steps must be taken to ensure humans still have a purpose. A degree of inequality, created by placing a limit on brain capacity, would give people a reason to improve. Likewise, the portion of the universe saturated with information and computing should be limited so that some potential remains for the future.

I’ll use The Singularity is Near, along with articles supporting Kurzweil (http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=1%23696), and One Half A Manifesto, Jaron Lanier’s paper on the dangers of Kurzweil’s vision of the future.

1 comment:

Adam Johns said...

I like the focus on Kurzweil. What's unclear to me in this proposal, though, is what you have to say back to him. Are you arguing that his views are wrong, and we won't actually achieve the singularity? Are you arguing that we can achieve the singularity (or perhaps merely that we *may*), but that we should not? That's my impression here - that you think he may be right, but that you oppose him, in which case we'd need to ask the question: how do you *stop* the singularity?

Short version: good *area*, but you need to work on finding a focused argument.