Tuesday, March 29, 2016

Hiring A Hacker

Did you know you can hire someone to hack something for you? Sounds pretty sweet if you ask me. I had no idea that hackerslist.com was a thing, and it seems very interesting. For those of you who do not know of this site, it is basically a freelance website for hacking projects. If there is something you need hacked, post it on the site and a hacker will respond and do the job for a price. This can benefit many people who need help with simple tasks, such as downloading all of their emails in the quickest way possible without knowing how.

Most of the time when you hear the word "hacker," it carries a negative connotation. This article points out that not all hacking is a bad thing. King explains that "Historically, a hacker is a technologist who has a particular sort of adventuresome and enthusiastic orientation to the creation and improvement of technology." These hackers are on a mission to find new and better ways to use technology constructively. On the other hand, there are disruptive hackers who, like those behind the Target data breach, hack to make quick money and benefit only themselves while hurting many others in the process.

Technology is definitely a factor in why hackers are motivated to do what they do. King gives an example of an iPhone that does not have the right language software available, so hackers hack the phone to allow for the language of their choice. He also brings up jailbreaking an iPhone, which was the exact example I was thinking of when reading about technology motivating hacking. People jailbreak their iPhones and iPods to make them more customizable. The hacking alters the technology not necessarily to make it better, but to make it fit their liking.

Overall, I'm open to hacking technology as long as it is constructive and not destructive.

Source: http://digitalethics.org/essays/good-reasons-hire-hacker/

Tuesday, March 22, 2016

LOLing at tragedy

Internet trolls are a strange phenomenon that has come to life with the different social platforms. I think the first time I realized trolls were a thing was on YouTube. It's weird and interesting at the same time that communities of these trolls form, saying anything opposite to the general opinion to get a reaction or to piss people off. It's also hard not to give in to the trolls when they are trolling. I often find myself reading comments on YouTube, becoming extremely offended, and wanting to defend the video or point of view that I know is correct. This is exactly what the trolls want, and it is too easy for them to get those angry reactions.

Prior to reading this article, I had no idea that RIP trolling was a thing. I was also unaware that there were trolls who continually create the same identity and befriend other trolls. This is a horrible phenomenon that shouldn't be happening. I understand that people troll and it is inevitable, but for a community of trolls to spend their time and energy posting horrible things to these Facebook pages is going too far. They are most definitely crossing the line by making separate Facebook pages just to mock a person's RIP page. The fact that these trolls are posting dead baby pictures on recently deceased children's RIP pages is disturbing as well.

I don't think there is a good way of controlling these trolls or getting rid of them, but I do think there is a way to lessen trolling's impact on people. For this to happen, we need to recognize trolls for what they are. If we see a comment that is extremely offensive or just plain horrible, report it, move on, or simply comment, "don't feed the trolls." If fewer people react to these trolls, they won't have any motive to keep posting on pages like these. Obviously trolls aren't going away, but we can help prevent the spread of trolling on Facebook pages and other social media.

Source: http://firstmonday.org/ojs/index.php/fm/article/view/3168/3115

Tuesday, March 1, 2016

The Doomsday Invention

Bostrom brings up a lot of very solid points about artificial intelligence in this article that I hadn't thought about before. The example he used in the video demonstration, about whether we will engineer our own extinction, with the white balls was so simple yet got his point across perfectly. We have been inventing things like nuclear weapons and robots, and so far nothing extremely bad has happened to kill off human civilization, but what happens when we do pull out this so-called black ball and invent something that destroys us all? It seems like robots are the next step toward this possibility. People want to create robots that are just like humans, but why? As I discussed in my last blog post, I am not in favor of this. Yes, create robots that are useful for efficiency and precise work, but do not create them to be like us.

A quote that scares me is, "In the history of computer science, no programmer has created code that can substantially improve itself." Since this hasn't happened before, we are unsure of the consequences that would come with it. I believe that once this happens, A.I. will be smarter than humans and will pretty much take over the world.

Another quote that scared me is, “The brain of the village idiot and the brain of a scientific genius are almost identical. So we might very well see relatively slow and incremental progress that doesn’t really raise any alarm bells until we are just one step away from something that is radically super intelligent.” We keep inventing new A.I. and advancing more and more toward creating human-like robots, but once we reach a point where these robots are extremely intelligent, we may have gone too far and will not be able to un-invent them.

It is strange because I recently saw a commercial for a self-driving car, and I am so opposed to the idea, yet my roommate said to me, "That is the future. Someday that's all people will have and nobody will be driving." This scared me because people are willingly allowing robots to be responsible for our safety.

By the looks of it, these things are not going away, and scientists are brushing aside the ethical dilemmas and the possibility of horrible outcomes. “There is just more to be said about the risk—and maybe more use in describing the pitfalls, so we know how to steer around them—than spending time now figuring out the details of how we are going to furnish the great palace a thousand years from now.” Instead of figuring out the downfalls of these artificial intelligences, scientists are figuring out ways around them in order to create more intelligent robots. At some point a line needs to be drawn, and it seems like, as of now, we may be crossing it.

Source: http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom