Towards Ethical Hacking
Over the past few years, Paul Graham, Eliezer Yudkowsky, a few other writers, and I have pushed the idea that while technology is a great and wondrous thing, it also has some severe side-effects that we are not considering. Many millions of people spend time every day with technology and later regret it. Yes, they were doing things they found pleasing at the time, but looking back it's not something they would have chosen to spend so much time on.
If you think about it, the video game has in some ways become the two-martini lunch of the 2010s. It used to be, 70 or 80 years ago, that drinking during lunch wasn't as unacceptable as it is today. Watch a few old movies, and somebody has a liquor bottle around somewhere in an office. Of course, that still happens today, and there's nothing wrong with keeping a bottle around -- but nowadays we realize that drinking during work might not be the greatest thing in the world to do. Afterwards, perhaps, but not in the middle of the day.
Back then, however, folks felt like they "deserved it".
I got an email from a video game player who told me basically "so what if I spend 6 hours every evening playing video games? I work hard all day. I deserve it."
As the French say, plus ça change, plus c'est la même chose -- the more things change, the more they remain the same.
Instead of getting into particular people or situations, is there a way to come up with some general system of ethics for hackers? Is there some rulebook we can use to determine whether or not we're actually providing real value in the world with what we do?
As it turns out, although no system of morality has dealt with this question directly -- after all, we're the first bunch of schmucks to make it this far -- there were some really smart guys in history who may help us out here.
Immanuel Kant was a pretty sharp guy who lived in the 1700s in what is now Russia. Kant struggled with the ideas of good and bad. He grew up in the middle of a great battle between the empiricists and the rationalists. The empiricists believed that you make decisions, moral or otherwise, based on your experiences in life. Grow up a Buddhist, you take on the morals of Buddhism. Grow up a Catholic, you take on the morals of Catholicism. What we know in life is simply a result of our experiences. In science, we only know something because we can have the experience of verifying it in a laboratory (or the experience of listening to a story about how it was verified). At the end of the day, we couldn't know or believe something if it were not for the experiences we've had in life.
The rationalists felt otherwise. They felt that reason alone was the key to knowledge, moral or otherwise. Sure, if you grew up a Catholic you might have the morals of Catholicism, but that's only because you learned the rules and precepts of Catholicism and learned how to apply them to your life. Things are not good or bad based on your particular experience -- there are universal rules that we can leverage. We don't have to test every water molecule in the world to know how water molecules act -- we simply learn the rules of how they act and then apply those rules generally. Likewise, we can learn general rules of good and bad and learn to apply those rules.
These two camps were kind of saying the same thing, yet there were important differences. At the extreme, empiricism morphed into a kind of immaterialism, the idea that nothing really exists except our experiences. Life is a holodeck. (Today we have probabilistic arguments that we may be living in a simulation; the discussion continues.) Samuel Johnson, miffed at what he thought was silliness, famously kicked a rock and said "I refute it thus." The extremes of rationalism were even sillier: announcing that things that could not currently be proved must not exist. (Today we have the same argument coming from scientists regarding all sorts of ephemeral, non-reproducible, or hard-to-observe events, like high-altitude sprites. Like I said, the debate still continues. This is one of the reasons studying philosophy has an immediate impact on your life: these discussions are not new, and it's better to learn how geniuses in the past felt about them before forming your own opinions. You save a lot of time that way.)
Kant walked into this and came up with something incredible: a rule of morality that was both empiricist and rationalist. I think it might be useful to us:
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
Kant was saying something very simple yet very profound. If you make a moral choice, make it as if you were making it for all of mankind, everywhere. If you decide to kill in war, then you approve of killing as a means of warfare for everyone. If you rob a man of his money, then you approve of everyone robbing each other of money as they see fit.
Turns out you can become a very moral person without having any religious instruction at all.
Let's apply that to creating technology. (We should probably call it Markham's Something-Or-Another. I'm drawing a blank right now.)
Create no interaction with the user that you would not require of every person in the world getting the same ultimate value.
That means when Norton pops up each month to "remind" me that their program is still working (and it deleted 100 cookies! woohoo!)? It's immoral. When an app requires me to have an account to talk to my friends -- something I managed to do for a long time before that app ever came out -- it's immoral. When the Flash advertisement takes over my screen or makes me look somewhere on a page other than the information I'm looking for? It's immoral.
I'll let you draw your own conclusions about video games in general. Here's a hint: I don't think they are all good or all evil. That's the beauty of the Markham-something-or-another rule: it gives us a tool to look at things in detail, not a list of bad and good things. It's both a rule and a tool.
Will this ever fly? Probably not. In posting this article, I am probably a lot like the guy who went jogging every day during lunch back in the 1950s -- sure, it's probably good for you, but it looks awfully boring, and there's something just a little odd about not wanting to relax and take care of yourself.
After all, you deserve it.
Hi Daniel,
I'm not sure I understand the basis of your thesis. Are you saying that people who develop video games are immoral? Or that video game developers should control the way customers use their applications?
Should game developers remove all elements of immersion or deep fulfillment from their games? I don't understand what you are trying to say. There are responsible video game players just as there are responsible drinkers. I know, I'm both. I play a video game maybe 6 hours per week, max. And last night I had a cocktail for the first time in months.
How is the social networking application in your example immoral? You have the option of using it, and part of the transaction is creating an account. There are several very legitimate reasons for requesting that you create an account before using an application -- chief among them is that I would not want to be spoken to by someone on the internet unless I know who they are, or at the very least that they will be held accountable for what they say.
But even if the very act of creating an account were immoral, that would not change the fact that the transaction itself is open and optional. It is not your only avenue for speaking with your friends. It is simply one which offers more convenience.
There is nothing immoral about this transaction. That you demand that convenience without payment, however, IS immoral.
Robert,
Thanks for the comment! I knew I'd get dragged into this video game thing.
Nobody is saying things should be free, and this is about technology development, not consumption.
If you read my something-or-another closely, you'll see that -- according to it -- it is immoral to add user interaction unless you would add that interaction for everybody _seeking the same value_. Now it's quite easy to argue that a video game has value in that it is entertainment. Therefore it is quite reasonable and natural to interact with your entertainment -- it's something we would impose on everyone. Or perhaps not, depending on how you define "entertainment". There is some room for further definition.
However, placing ads in a video game might qualify as immoral, since the purpose of the game is entertainment, and some ads might not be entertaining. The developer is using up brainwidth in an unnecessary manner in order to make a buck. Remember, it's the combination of the value the user (and all users) seek and the nature of the imposed interaction that counts.
Taking FB as an example, having and communicating with friends has a value that has nothing to do with watching an advertisement for a pharmacy, having to join a service, or giving away free pieces of your life.
Here's another example: each month I pay Sprint around 80 bucks to connect to the internet. I give them the money, and in return I expect to be able to connect to the internet anywhere Sprint service is available. When I get the modem, however, I find that it comes with its own software -- Sprint has kindly provided me with the ability to use GPS and other "location-aware" services. Not that I asked for them. But I figure, hey, since they are already available, it's nice to have a program that lets me use them.
But Sprint doesn't stop with just providing me with more stuff. The program auto-starts when I start my computer. Instead of using the traditional modem-control menus on my startup bar, Sprint wants me to use their program to access the net. The program takes time to load. The program needs to update itself over the internet. Newer versions go into my network stack and start changing my wifi settings. Why? Because you can't have wifi and a cell modem on at the same time. A good goal, yes, but it causes me to have to go in and reset my wifi settings once a day or disable their program. And when their program connects me to the net? It takes me straight to the Sprint homepage.
Now I have already paid Sprint each month for a service that is valuable to me -- providing me with connectivity. It really requires no interaction on my part. Yet Sprint, under the guise of helping me in greater and greater fashion, is actually increasing the work I do to manage all their "help".
I have a cheap Vaio computer I bought for my mom that I use from time to time. I have decided not to register it. I will never need support, and I need nothing from Sony. But every time I boot the box -- you guessed it -- the Vaio insists on my filling out a registration form. And I've spent a good bit of time trying to turn it off. Best I can figure, Sony integrated the registration into their driver software somewhere.
This type of user interaction -- where it's supposed to be an honest trade-off, or where the vendor says they're trying to look out for you -- is, in reality, much more self-serving for the vendor than it is useful to me. Dozens of web applications, games, and desktop programs drag me into thinking about stuff that's not normally important, all under the guise of "helping". In reality, even if 90% of these apps were making an honest trade -- and they're not -- that would still leave 10% of my technology experience consisting of stimuli foisted on me against my will.
Now multiply that times a hundred million customers.
I find that immoral. And then when you add the addictive nature of many of these interactions, it's even worse.
These are not even trades. Giving up all your personal information for Facebook to "keep" for you is not worth the free service. Playing Farmville for 20 hours a week is not worth the entertainment value. Of course, and this is important, this is a decision for each person to make. But I can say that overall, the trades in most technology interactions are rigged much more in the vendors' favor than the consumers'.
I could go on -- there are dozens more examples in my life, and I'm sure there would be even more if we stopped and thought about it. Anti-virus programs that want to pop up all the time to tell you what a good job they are doing. Even nagware programs that let you use them -- as long as they can keep pestering you. Dude, I'm not paying for the app, already. Make it go away.
Newspapers and radio pioneered this idea of giving away content in return for attention to commercials -- for taking up mindwidth -- and it worked very well under certain conditions. It was fine when the content was external and limited in nature. But now we are reaching the end of where that model is going to work. We are reaching a point in time where our computers are more and more a part of our brains, and we consume content all of the time. Computers are not the passive, optional things that radio, TV, and even newspapers are. It's more like we're selling off pieces of our mindwidth to pay for things. We're trading away pieces of our brain for little bits on a server somewhere. Arguing at the extreme, this could be considered a form of involuntary servitude.
There's an important point here that I think the law and society have not realized: computers are not media devices in the same way a TV or a CD player is. They are more like brain-augmentation devices. Because of this, the way that computers and technology interact with people can have a massive effect -- most of it good, but some of it bad -- on tens or hundreds of millions of people. Nothing up to this point in human evolution has been like this. It is as if we were building hats that made people twice as smart -- but in return took away parts of their lives by providing engaging and addictive material that most of them find difficult to resist in this format. Up until about 30 years ago, such hats would have been considered immoral. Terrifying, even. Now it's considered somehow strange to question the way things are.
As far as payment goes, it's much better to pay for what you want and to be very clear about what you are trying to accomplish. I think if you keep in mind the goals the user is chasing -- and how they would pursue those goals without technology -- then, as a technology developer, you can design moral interfaces around those goals.
Of course, I'm just proposing this as a starting place for the conversation. I don't have all the answers -- hell, I don't even have all the questions -- but I _think_ we're going to need to evolve a new morality to deal with these new situations. This was a first stab at that.