This brings up the issue of protection and its relationship with privacy. In worldview 1, we let the church protect us; as worldviews progressed, that role shifted to the state. In the next worldview, we may need to protect ourselves. Our identities are now what is at stake, and something as large as the national government exists at too macro a level to be the protecting entity. Defining how government and the individual coexist will be an integral part of the new worldview. In many ways, governments are transcended by the seemingly limitless bounds of the Internet, and yet it is the government that can best restrict our access to and experience of the Internet. Should it be allowed to do so? Is China justified in blocking content with which the ruling party disagrees? What about our own government and its domestic spying practices? The argument from both is that they are protecting the masses from potentially harming themselves. If the goals were truly this altruistic, no one would have cause to worry. However, information is often misused and corrupted to serve a purpose. Can the government be trusted not to conduct a McCarthy-esque digital witch hunt? Protecting identity at the macro scale is too large a task. Therefore, I think that privacy and protection become roles relegated to the individual at the micro scale, while the government sets the guidance for proper governance of digitally gathered information.

Let me pause and define privacy. Privacy is "an individual's desire and ability to keep certain information about themselves hidden from others" (Fule and Roddick). The important thing to remember here is that privacy online restores to the individual their humanity offline. Protecting this privacy is a critical part of any future where reality and the Internet must coexist.

Is it possible to data mine and exclude sensitive information? Fule and Roddick seem to argue that it can be done with improved rule sets for selecting data. One also has to understand the rules being constructed and recognize where potentially harmful information can surface. This is analogous to saying that we must predict every way our creation could be used for harm and take every imaginable step to ensure those imagined harms cannot become reality. As has been demonstrated throughout the course, anticipating every possible use of a product is impossible. Even if it were possible, their solution requires the user to interface with the data-mining algorithm and make critical decisions, and often it is the user who does not understand the information and its implications. This partial solution, and the derivatives that have come from it since, are the reason I say that a purely technical solution is not possible.
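To make the rule-set idea concrete, here is a minimal sketch of how sensitivity-aware filtering of mined rules might look. The attribute names, weights, scoring function, and threshold are all illustrative assumptions of mine, not Fule and Roddick's actual method; the point is only that a human must still assign the weights and review the flagged rules, which is exactly where the approach falls short.

```python
# Hypothetical sketch: weight each attribute by how sensitive an analyst
# judges it to be, score every mined association rule by the attributes
# it exposes, and route high-scoring rules to human review.
# All names and numbers below are assumptions for illustration.

SENSITIVITY = {
    "zip_code": 0.3,              # quasi-identifier
    "age": 0.2,                   # quasi-identifier
    "purchase_category": 0.1,     # relatively benign
    "medical_condition": 0.9,     # highly sensitive
    "political_affiliation": 0.8, # highly sensitive
}

def rule_sensitivity(antecedent, consequent):
    """Score a rule by the most sensitive attribute it touches."""
    attrs = set(antecedent) | set(consequent)
    return max(SENSITIVITY.get(a, 0.0) for a in attrs)

def flag_sensitive_rules(rules, threshold=0.5):
    """Partition mined rules into those safe to release automatically
    and those requiring human review before publication."""
    safe, review = [], []
    for antecedent, consequent in rules:
        if rule_sensitivity(antecedent, consequent) >= threshold:
            review.append((antecedent, consequent))
        else:
            safe.append((antecedent, consequent))
    return safe, review

rules = [
    (["zip_code", "age"], ["purchase_category"]),
    (["zip_code"], ["medical_condition"]),
]
safe, review = flag_sensitive_rules(rules)
# The second rule links a quasi-identifier to a medical attribute,
# so it is routed to human review instead of automatic release.
```

Even in this toy version, the hard decisions (which attributes are sensitive, how sensitive, and what to do with flagged rules) remain with a person, which is why the essay argues the problem cannot be solved by technique alone.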

I think there are professionals in computer science who agree that this is a social issue that must be discussed. Above all else, it is an issue of identity (Woodbury). Am I willing to sell my identity for the convenience of anonymity? The key to answering that question is knowing: I have to know to whom I am giving my information, to whom they will give it, and how anyone will use it. Woodbury stresses that we cannot trust Internet businesses to follow ethically acceptable practices. It may not be in their best interest to secure user data, and as a result a user's information may be up for sale to the highest bidder. Knowing that, what are you willing to share? Your identity should not be a commodity sold to the highest bidder, yet by doing business on the Internet you risk exactly that. Consumers must demand that companies protect their identities.