Tuesday, March 1, 2016

The Doomsday Invention

Bostrom brings up a lot of very solid points about artificial intelligence in this article that I hadn't thought about before. The example he used in the video demonstration, about whether we will engineer our own extinction, with the white balls, was so simple yet got his point across perfectly. We have been inventing things like nuclear weapons and robots, and so far nothing extremely bad has happened to kill off human civilization, but what happens when we do pull out this so-called black ball and invent something that destroys us all? It seems like robots are the next step toward this possibility. People want to create robots that are just like humans, but why? As I discussed in my last blog post, I am not in favor of this. Yes, create robots that are useful for efficiency and precise work, but do not create them to be like us.

A quote that scares me is, "In the history of computer science, no programmer has created code that can substantially improve itself." Since this hasn't happened before, we don't know what consequences would come with it. I believe that once it does happen, A.I. will be smarter than humans and will pretty much take over the world.


Another quote that scares me is, “The brain of the village idiot and the brain of a scientific genius are almost identical. So we might very well see relatively slow and incremental progress that doesn’t really raise any alarm bells until we are just one step away from something that is radically super intelligent.” We keep inventing new A.I. and advancing further toward creating human-like robots, but once we reach the point where these robots are extremely intelligent, we may have gone too far and will not be able to un-invent them.

It is strange because I recently saw a commercial for a self-driving car, and I am so opposed to the idea, yet my roommate said to me, "That is the future. Someday that's all people will have and nobody will be driving." This scared me because people are willingly allowing robots to be responsible for our safety.


By the looks of it, these things are not going away, and scientists are brushing aside the ethical dilemmas and the possibility of horrible outcomes. “There is just more to be said about the risk—and maybe more use in describing the pitfalls, so we know how to steer around them—than spending time now figuring out the details of how we are going to furnish the great palace a thousand years from now.” Instead of confronting the downsides of these artificial intelligences, scientists are figuring out ways around them in order to create ever more intelligent robots. At some point a line needs to be drawn, and it seems like, as of now, we may be crossing it.


Source: http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom
