Back to Big Brother: is there something on the Internet that may be called Big Brother?
Yes, there is.
It's not what you're thinking of; there's no such thing as a huge database with all our names and a log of each and every session we open on the net. At least I hope not. But there are other things that, in the common user's perception, are almost as scary as that.
Right now you're under my scope. This blog has a log feature (courtesy of Eponym), like any other web server. This log tells me where you come from, your IP address, what browser you're using, the timestamp, what you requested, whether you clicked on a link and where the link was, etc. It doesn't say much about you yourself. I wouldn't be able to follow your steps unless you used exactly the same browser from the same IP, and even then I couldn't be sure it's you all the time. The idea of this log is to help the owner administer the site: check resource requirements, adjust the design of the page to serve all the different browsers, and so on.
But let's say that I give you an option to "improve your reading experience", something like choosing your own font or your own background color. Unless I can identify you, you'll have to repeat the choice every single time. One option would be to make you open an account and save your preferences. The other is a "cookie".
A "cookie" is a piece of data, related to a site, that is stored on your computer. The cookie allows the server to recognize you from request to request, remember your preferences and follow your steps.
But before going further into this, there's something you have to understand about web servers. Let's say you log into your webmail or go to a news page. You spend some time there and call that a session. The server knows you, because you said who you are (or didn't, in the case of the news page). But in both cases you notice that the service was oriented to you. Your webmail always shows your inbox with your messages and sends in your name. The news page always shows the headlines related to the topics you chose previously. You don't have to identify yourself or repeat your choices every single time you open a new page.
However, from the technical point of view, a web session is a request for one element and one element only. When you open this page, you send the server a request for the index.htm document. The server sends you that file and you close the session (your browser does). The index.htm file is just the text and the formatting of the page; you can check it out with the View Source option in your browser. Once your browser has the HTML file, it starts asking for the elements required to build the page for you. The images, JavaScript files, any multimedia files, etc.: they're all referenced in the HTML file and requested from the server one by one, in different sessions. By session I mean a TCP/IP session: your browser opens the session, the server acknowledges the connection, your browser sends the message requesting one element, the server sends the element, your browser closes the session. Up to this point, this is what the HTTP protocol does, no more, no less.
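As a sketch of that request-per-element dance, here's a toy page and the sequence of GET requests a browser would issue for it, one per TCP session. The page and its element paths are invented for the example; real browsers do essentially this with HTTP/1.0.

```python
from html.parser import HTMLParser

# A toy page, standing in for index.htm (all paths are made up).
INDEX_HTM = """
<html><body>
<img src="/logo.gif">
<script src="/menu.js"></script>
<img src="/photo.jpg">
</body></html>
"""

class ElementFinder(HTMLParser):
    """Collect the extra elements the browser must fetch."""
    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        src = dict(attrs).get("src")
        if src:
            self.elements.append(src)

def requests_for_page(html):
    # First session: the page itself. Then one new session per element.
    finder = ElementFinder()
    finder.feed(html)
    return ["GET /index.htm HTTP/1.0"] + [
        f"GET {path} HTTP/1.0" for path in finder.elements
    ]

for line in requests_for_page(INDEX_HTM):
    print(line)
```

Four requests, four separate sessions: the page, then each referenced element in turn.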
The protocol itself has no way to know that it's you through all those sessions, and for the most plain and simple pages it has no need to. Like this page: any request for any element will be served exactly the same regardless of the client. But if you're using your webmail or your bank account, the server needs to know who you are in order to build a page with the information relevant to you.
The cookie does that: the server creates a virtual session, assigns a code to it and sends it to your browser in a cookie. Every time your browser sends a request to the server, it sends the cookie too. The server knows that that particular cookie was generated and sent to you at the time you identified yourself, hence any request bearing that cookie must have come from you.
That's a session cookie; it's good only during that session. The cookie is created with a short lifespan, on the order of hours or minutes, and should be discarded when the browser closes.
There are also persistent cookies: cookies with a long lifespan, even beyond reasonable limits that we can call eternity for practical purposes. Those cookies are the ones used to "improve your browsing experience". They can store your site preferences, but most of the time they hold just one code, an ID code. The server stores your preferences and links that set to the ID code sent to you in the cookie. From that point on, all your requests include the cookie; the server looks up your preferences and personalizes the page for you. Nice, isn't it?
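A rough sketch of the persistent variant, with an invented ID scheme and made-up preference names; the Max-Age attribute is what turns a session cookie into a long-lived one:

```python
# Sketch of a persistent "preferences" cookie: the browser keeps only an
# ID code; the server keeps the actual preferences, keyed by that code.

TEN_YEARS = 10 * 365 * 24 * 3600  # "eternity for practical purposes"

preferences = {}  # server side: ID code -> stored choices

def issue_id(code, font, background):
    preferences[code] = {"font": font, "background": background}
    # Max-Age makes the cookie persistent instead of session-only.
    return f"Set-Cookie: id={code}; Max-Age={TEN_YEARS}"

def personalize(code):
    prefs = preferences.get(code, {"font": "default", "background": "white"})
    return f"<body style='font:{prefs['font']};background:{prefs['background']}'>"

print(issue_id("ab12", "serif", "black"))
print(personalize("ab12"))   # returning visitor gets her choices back
print(personalize("none"))   # unknown ID falls back to the defaults
```

The cookie itself never holds your choices, only the code that points at them on the server.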
Also, now the server is able to track your steps from session to session. Let's say you visit your favorite bookstore and spend some time looking for books about gardening. Then, on your next visit, half of the books highlighted on the front page are about gardening. Have they read your mind? Is this a case of Jung's synchronicity? Of course not: your browser now has a cookie, and your cookie has been linked to many search requests for "gardening". The server does it to improve your "shopping experience". And to make it more likely that you buy a book.
Now, this seems intrusive; they're really tracking your every step, what you look for, what you're into. Yes, it's true. But unless you open an account with them and identify yourself with your real information, they have no way to know who you are. And, most likely, they don't care.
Is that so bad?
I bet there's at least one store where you drop by frequently. A coffee shop on the way to work, a deli, a drugstore, a tobacco store. If there is, chances are you're served before ordering most of the time. The server acknowledges your "cookie" (you, the real you) and has it linked to your preferred mochaccino, sandwich or cigarette brand. We don't see this as intrusive. However, a stranger is aware of our preferences: where we buy, when we buy, what we buy.
The difference is in our minds. The desk clerk is human, the server is not, and we have a natural inclination to trust humans and distrust machines. On the other hand, we don't pick up a porn magazine in front of a human clerk, but we'll take it from the server we distrust.
I know, it's not easy to understand. But the human mind is too complex to be explained in this blog.
Moving forward with the Internet and privacy.
So far, we've been through some of the ways a server can look over our shoulders. None seems really scary. Even a persistent cookie looks harmless: it doesn't carry our identity, and it's limited to the server that issued it. And you have many ways to avoid them.
In your browser settings there's an option to set policies for cookies. The options change from one brand or version to another, but basically they are whether to accept cookies or not, what to do with them, and a list to single out servers for specific actions.
Nowadays, a policy of rejecting all cookies is a bad idea, since most sites involving long sessions, like webmail or shopping sites, rely on cookies to operate. So at the very least you have to allow session cookies, optionally only for the sites you designate. You can also keep a list of sites that you want to remember your preferences. Then you either block all the rest, or set a policy to delete all cookies when the browser closes, or go to your browser settings and delete them yourself.
That's a good set of policies if you're worried about cookies. I prefer to delete them myself, but not from my browser settings: I go to the cookies directory and take them all out.
You can do the same. Go to your Documents and Settings directory; there should be one with your profile name and, in there, a Cookies directory. This applies if you're using a 32-bit version of Windows or later; other operating systems and browsers may have their own separate directory. Anyway, you'll find a list of files, most likely named your_name@some_domain. Each file is a cookie related to that particular domain, so you'll find there a list of some places you visited and some you didn't. Yes, you read that right: some that you THINK you didn't. I'm sure that you've never been to 2o7 or doubleclick or zedo or webtrends, and the list is a lot longer than this.
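If you'd rather script the inspection, a sketch like this lists the domains behind those name@domain files. The naming pattern assumed here is the old Windows Cookies folder's; the sketch fakes such a directory with temporary files so it runs anywhere, and on a real machine you would point it at your own Cookies path instead.

```python
import os
import tempfile

def cookie_domains(directory):
    """List the domains behind name@domain-style cookie files."""
    domains = set()
    for name in os.listdir(directory):
        if "@" in name:
            after_at = name.split("@", 1)[1]
            # Strip the trailing "[1].txt"-style suffix the browser adds.
            domain = after_at.split("[")[0].removesuffix(".txt")
            domains.add(domain)
    return sorted(domains)

# Fake a Cookies directory for the demonstration.
with tempfile.TemporaryDirectory() as fake:
    for f in ("john@2o7[1].txt", "john@doubleclick[2].txt", "john@zedo[1].txt"):
        open(os.path.join(fake, f), "w").close()
    print(cookie_domains(fake))  # ['2o7', 'doubleclick', 'zedo']
```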
Now you must be wondering how this happened. I said that you get a cookie when you visit a site, that you only get cookies related to that particular site, and that your browser sends cookies only to the site they belong to. And all of this is completely true, at least I hope so. The answer is that you visit a huge number of sites without knowing it.
Let's go back for a second to the HTTP session. When you ask for a page, the server sends you the first element, the file of the page itself. It contains all (or most of) the text, the formatting information and the references to all the other objects. But those objects may or may not be on the same server. So you get your page from server A, and the HTML text says that an image is required and that it's located at server B. Your browser opens a session with server B, exchanges cookies if needed, and gets the image. Meanwhile, you've visited a site you didn't explicitly ask for.
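A sketch of how those hidden visits can be spotted: parse the page served by your site and list every other host its elements point to. All the host names here are invented for the example.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    """Collect hosts, other than the page's own, referenced by src links."""
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).netloc
            if host and host != self.first_party:
                self.third_parties.add(host)

# A made-up page from "server A" pulling elements from other servers.
PAGE = """
<img src="http://www.example-news.com/masthead.gif">
<img src="http://ads.example-ads.net/banner.gif">
<script src="http://stats.example-metrics.com/s.js"></script>
"""

finder = ThirdPartyFinder("www.example-news.com")
finder.feed(PAGE)
print(sorted(finder.third_parties))
# One page requested, two extra servers quietly contacted.
```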
This is not against any rules; it's totally normal, although unexpected for the common user. Some of these links exist just because the page requires an element from another server. For example, some forum pages don't allow users to store avatar images on the server; you have to store the image somewhere else and configure the link in your profile. Every time a page has to show your avatar, it includes the link to the server you designated for that. These cases most likely don't use a cookie.
Most of the links that use cookies are advertising: pages that have contracts with doubleclick or zedo are paid for setting links on their pages. Every time you request a page, one or more requests are sent to the advertising server for the elements required to complete the page. Those elements may always be the same, or be changed frequently, or be rotated among a group of ads. Those servers need to keep track of each and every request made, to show results to their clients and to pay the page owners. They set cookies for many reasons: they want to know how many different people were exposed to each ad, they want you to see as many different ads as possible and, if you clicked one, they want to send you the ads that you're more likely to click.
Remember that one rule of cookies is that they're only related to one site? They are. The cookies from ad server A are and will be exchanged only with server A. The problem is that server A is being referenced from sites B, C and D, the sites you're visiting. Now server A can tell when and where you visit each of these sites, and if you pick an ad on B they'll send you related ads when you visit C and D.
This is targeted marketing, and I doubt they use it for any other evil purpose. In fact, most of them just control the number of exposures for each ad, balancing diversity and quotas, showing each user as many different ads as possible and reaching the goals required by each paying advertiser. The selection of topics is done beforehand: porn ads on porn sites, foods and wines on epicurean sites, etc.
Google does this topic analysis for its AdSense program. The topics are chosen based on the statement of the site owner who subscribes to the program, but also on the content. It's not very accurate. Suppose you have a site about the red lobster of the south Pacific (I have no idea if such a thing exists); you're trying to raise awareness among the general public about this creature in danger of extinction due to excessive fishing and habitat degradation by human activities. AdSense could fill your site with ads about lobster restaurants, fresh lobster on sale and lobster recipes. But taking into account the huge number of ads shown every minute, the results are good. Otherwise, people wouldn't pay for it or take it for their sites.
I don't know if Google is doing what I'm about to mention; if it starts doing it, I hope they send some money my way. The system gets more accurate as more users choose ads. In the lobster case, most users would ignore the ads, making them less likely to be reassigned to that site. On other sites, where the ads match the content of the site and the interests of the visitors, the click rate is high, making those ads more likely to be assigned to that site and to others with related content or linked from there.
I don't like ad-laden sites where you have to dig for the content you're looking for, not to mention those sites that are all ads, no content. But at some point I have to compromise. I like the idea of having free web sites with content I can use: news, recipes, instructions of any kind, reading material. The owners of the sites need an incentive to keep doing it, and money is THE incentive. Web sites with ads are a good thing because the ads keep those sites free for everyone else; however, small sites don't have the mass of visitors required to negotiate with advertisers directly. Ad servers filled that gap: by dealing with a large number of sites at once, they can provide that mass of visitors for the advertiser.
The last group of the unknown cookies in your directory (and mine) is the scariest of all. This is the one we can call Big Brother. I know for sure that you have at least one 2o7 cookie, and the reason I know that is that almost all the most popular sites have links to it. The owner of those cookies is a company called Omniture, probably the biggest of its kind but not the only one. Omniture does statistical analysis. They basically count every single time one of their links is requested and relate it to the connected cookie. Each time a link is requested, they know whether you have one or more of their cookies (if not, they send you one right away), what page you've just opened, the time of the request, the server that served that page, your browser brand, some of the basic options you have set and some other minor information. This information doesn't seem valuable at first; it doesn't include your identity, and I don't think they really care about it. But if you put it together with all the millions of other little bits of information, things look very different. Of course, it takes talent to make valuable data out of such a huge pile of bits, and Omniture seems to have it, being the most successful in its class.
Evil as it may seem, there's nothing wrong with it. Let me rephrase that: I can think of a thousand reasons why it's wrong to do that, but not one related to the privacy of the users. The owners of the sites have the right to know at least how many times their pages are visited; they even have the right to know who is reading their pages. Some do, and ask you to register with your name, your address, your phone number. Some even go further and request evidence of your identity to register. But it's your choice to do so. Once you voluntarily access a site, they own that bit of information about you.
On the practical side of the matter, your identity means nothing. There's no sense in, or need for, knowing who you are. Statistics and statistical correlation have no meaning unless the number of events measured is huge.
Let's say you have a die. You know that the odds of getting a certain number in a throw are 1 in 6, one sixth. You assume that all the numbers have the same probability. You throw it once, and the probability of getting any number is the same for the next throw. However, it feels as if the number you got on the first throw should now be slightly less probable, because in the long run all the numbers should appear about the same number of times. Sounds like a paradox, but it isn't: each throw is independent, and the uniform distribution after a large number of events is simply a consequence of those events having the same probability. The key here is the large number of events because, as any Yahtzee player knows, rolling the same number many times in a row is possible. But if you roll the same die six thousand times, you should get each number about one thousand times. A small deviation is expected, but if you get something beyond 2 or 3 percent, you'd better get that die checked.
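You can check that arithmetic yourself with a simulated die; the seed is fixed only so the run is repeatable.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

ROLLS = 6000
counts = [0] * 6
for _ in range(ROLLS):
    counts[random.randrange(6)] += 1  # one fair throw: faces 0..5

expected = ROLLS / 6  # 1000 per face
for face, n in enumerate(counts, start=1):
    deviation = 100 * (n - expected) / expected
    print(f"face {face}: {n} times ({deviation:+.1f}%)")
# Each face comes up close to 1000 times; a large, persistent
# deviation would mean the die (here, the generator) is loaded.
```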
Statistical analysis is based on this. Human behavior can't be calculated in terms of probability, at least not beforehand. But if you measure some event a large number of times, you can infer the probability from there.
I'll give you an example of correlation. Imagine a graph plotting people's ages against a list of sites those people visited over a period of time. After you plot the first 10 points, that's all you've got: 10 points scattered across the graph. As the number increases, you can start to see trends, or see that there are none. A site with an even distribution of points along the whole age scale has no correlation with age, meaning that age is not a factor for that particular site. If a site is more popular among people of a certain age, that part of its line has a higher density of points. And the same goes across the age scale: the sites more popular with each age segment show a higher density of points.
Not so many years ago, statistical correlation wasn't so popular, simply because it wasn't easy to get large numbers of measurements to analyze. Of course some statistical analysis was done, but in most cases the numbers were not big enough to make the analysis accurate.
The Internet changed that. Not only can you get millions of millions of measurements, you can get millions of different events. Even more, you can link different events to the same person. It doesn't matter who he or she is; what's important is that those events are related to the same person. And, best of all, the collection of data is done automatically.
As you can see, someone's looking over your shoulder while you surf around the Internet. I think that marketing is evil, and this kind of marketing is even worse than evil. But not because our personal privacy is being violated (I don't think it is); it's because our collective privacy is being violated. We, as a human group, are being closely watched, scrutinized and dissected. But I won't complain; I still feel that we're far away from 1984.
One last comment about Omniture. If you go to 2o7.net, you'll get to a page where Omniture briefly explains the meaning of all those links you find on other sites pointing at 2o7.net. Don't expect an apology. They do this on behalf of their customers, the web sites, so go check the privacy policy of each site. And they're right.
The funny thing is that at the end of the page they have a link that allows you to opt out of the system. If you don't want to be watched by them, you just have to click there... and get another cookie.
This is a blog dedicated to talking about spam, scams, phishing, fake banks and other nuisances of the Internet. It's also a chance to practice my written English; I need something that forces me to write at least a small bit every day. Corporate English classes were not helping. My name is an homage to Wolfenstein, the game that started it all, and to Bond. Enjoy (or not...)
7/07/2006
7/05/2006
More scammers' mail addresses
I've been neglecting those who were kicked out of their mail servers. I'm sure they'll be back soon. Meanwhile, I'd like them to see their names listed here.
rev_will_kingsley147@yahoo.com
k_kelleysassociates@yahoo.co.uk
emma5050tg@yahoo.com
musa_ali01@latinmail.com
favormonic@yahoo.com
albert_abossi60@yahoo.com
hamar122@yahoo.com
john_imoh3@yahoo.com
barrister_dede_1@yahoo.co.uk
maryann_prety@yahoo.com
maryann_preety@yahoo.com
louisa_chris24@yahoo.co.uk
kietachedom3@yahoo.com
sussybangy_001@yahoo.com
coleken10@yahoo.com
georgekofi40@yahoo.com
justice_ng11@yahoo.com
larryobe30@yahoo.com
julien.kodila@yahoo.com
goodwave01@latinmail.com
jennifer.stephens17@yahoo.ie
brown_walter004@yahoo.co.uk
hamar122@yahoo.com
zhang_wakenge18@yahoo.co.uk
jacob_molak2006@yahoo.ca
barristeredwardjones2@yahoo.com
7/02/2006
Big Brother - Intermission
This is an article out of schedule; I had this topic in mind for a later time. However, this issue is urgent and requires all our attention today. I'm talking about net neutrality.
Neutrality is not an easy concept to understand, mostly because there's no such thing. Neutrality means that each and every packet that goes through the net is treated equally.
The Internet doesn't enforce neutrality; its neutrality is natural, not meant or produced by human action. The net is neutral because it does nothing to avoid being neutral. And the big issue now is that the ISPs want to change that: they want to change the rules and treat some packets differently.
Here's the idea. According to the ISPs, the major problem with the Internet today is that no bandwidth is enough. Not so long ago we did fine with a 14 Kbps modem (some of us started with a 300 bps modem); either way, we were able to use the net with the services available at the time. Soon we moved to faster modems: 28 Kbps, 33 Kbps, 56 Kbps. Why is arguable: was it because the technology allowed us to? Was it because the requirements of the available services grew? The point is that, going this way (according to the ISPs), no bandwidth will ever be enough to ensure the quality of the services as their requirements keep growing. Today it's not out of the question to have a 2 Mbps Internet connection in your house; think about it, that's over one hundred and forty times that old 14 Kbps modem.
The solution proposed is to break the neutrality of the network and give some packets priority. This way, the services that require immediate attention will always work, and those with less urgency will be delayed. They can prove mathematically how this works and how happy we'll be with the new, improved, optimized Internet.
On the other side, the neutrality advocates show a different scenario. The priority of the packets won't be determined by technical service requirements but by commercial agreement. The major players of the Internet will pay for priority. This way, search engine X pays for priority and search engine Y doesn't; if you access X you'll get an immediate response, while if you access Y you'll have to wait. It could be a search engine, a video streaming service, an e-mail service, anything. The point is that those who can pay for priority, and are willing to do so, will get a differential treatment that makes their services more appealing to the final user. The aftermath will be that all the small players fade and die.
You're probably wondering which side I'm on, or thinking that you know already. Either way you're wrong; I'm about to crash both sides.
The priority advocates use the quality of service as the basis for their arguments; however, one of them was very clear when he said "Google is making a lot of money using our bandwidth". So the quality of the service is not the main concern. They see that there are people making money, big money, using their infrastructure, and they want a piece of the action. But they have it already: they're being paid by Google and all the other content providers, directly or indirectly, and by all the final users, directly or indirectly. Without all those making content available to the final users, the business of the network itself wouldn't be what it is today, wouldn't be as profitable as it is today. They just want more money; they're not increasing the value of the service, they're about to decrease it by limiting access to the content.
The technical proposal they use to hide their real intentions is asinine, to say the least. According to them, giving priority to packets with higher speed requirements will ensure the quality of those services and keep the network less cluttered, in a way that lets all the traffic flow more easily. Among those services they mention the communications of emergency services, remote critical operation (like surgery), and video and audio. Let's take a look at them one by one.
I didn't know that emergency services were using the Internet to communicate. I think it's fine; as I said before, the Internet is fast, easy to use and reliable. But not for emergency response. There are a lot of things they can do over the Internet, like surveillance cameras, web sites for public information, email for non-critical communications. In times of emergency they need real-time, coordinated communications, like the ones they already have over radio and telephone. Even if they need networks, they can use their own equipment, with land lines if those are available or can be set up, or with wireless communication. They can use the services of the same carriers that want to prioritize the emergency traffic over the Internet, using network segments not shared with the Internet. In brief, emergency services have their own communications and, if needed, have to develop new ones. The Internet may be a non-critical support service, even a backup system, but it wasn't designed for that use and shouldn't be used that way.
The same goes for the remote operation of surgical instruments. I don't know who was the genius behind this idea; the phrase he used was something like "if there's a human being on the operating table, we don't want the packet that will save his life to be late". Well, I don't either, so I have a couple of solutions for that. First, if you're about to do surgery on a human, try to be there. If there's no way to get there, for physical reasons or your busy schedule, and there's no other chance to save his life, the second solution is to get something better than the Internet. There are many choices, including one that the same people who want to prioritize your traffic can give you: a private network. Again, the Internet wasn't designed for that; it's not reliable for that kind of real-time critical operation.
The other services, not so critical by themselves (like video, audio and telephony), have the same problem. A stream of analog data in real time has to be digitized and packetized to be sent through the Internet and then reconstructed at its destination. If the packets are delayed, the quality of the service is degraded: the video freezes, the audio makes distorted sounds. But that's the way the Internet was designed; it's not reliable for streams. It's not a flaw, it's how it was created. You can't cut your steak with a fork; that's not a flaw of the fork, you need a knife. And we have just the perfect knife. If you want video in real time, easy to operate, cheap and reliable, that technology is already available. It's called TE-LE-VI-SION. If you want audio in real time, easy to operate, cheap and reliable, that technology is already available. It's called RA-DIO. And if you want telephony in real time, easy to operate, cheap and reliable, that technology is already available. It's called TE-LE-PHONE. And the beauty of all this is that these technologies were designed specifically for that; they're not being adapted, modified or "prioritized" to deliver. They work just fine and have been doing so for many years. Since they were created they have been improved, and they'll improve even more in the future. So why are we so eager to painfully transform something not fitted for a job into something able to do it? Even worse, do the math for the final user: we'll be trading our hundred-dollar television sets for thousand-dollar computers, our ten-dollar radios too. What's the point? And don't get me wrong, I think it's great to have some video, audio and telephony over the Internet. I'm happy to get so much from a network that wasn't originally designed for that. But if I want to see a movie, I go to my TV set; if the movie I want is not on, I go to the video club and rent it, and I can do that using the Internet, which is cool.
If I want to listen to the radio, I turn on the radio; if I want to talk to someone, I call him on the phone. And if I have the chance to talk with someone far away using the Internet, great. It's cheaper than the phone too, and it makes me so happy that I don't care if the sound is not crisp and crystal clear. It's more than enough to achieve communication, and that's more than I was expecting from the Internet. What about you?
At this point it seems pretty clear that I'm with the neutrality advocates, but I'm not. They want the government to regulate and ensure neutrality, and I don't want government regulation. The carriers own their networks and, as owners, they have the right to do with them whatever they want. If they want to provide traffic prioritized by any rule they choose, they're entitled to do so. It's their networks we're talking about. The rest of the world has the choice of buying service from them or not. It's that simple; any other point of view is an outrageous violation of property rights. We are used to it because our own rights are violated on a daily basis, but piling up another violation won't fix the problem. I think we have to let the carriers do what they want with their networks; we have to respect their rights.
There are also some technical and practical aspects that have to be taken into account. Neutrality advocates would say that my position, defending the rights of the carriers over everything else, will damage the Internet, and I agree in part. But they have to understand too that neutrality doesn't exist today and never really existed.
Every owner of a network has the ability to regulate the traffic inside it. I, for example, have full control of my network. My link with the Internet is totally under my control, and I can decide how much bandwidth is available for each service, or whether a particular service is blocked. And I do it, for practical reasons: services not authorized by company policy are blocked, webmail pages that refresh too often are restricted in the amount of bandwidth they use, services to customers and contractors are prioritized. Your ISP is probably doing the same with different criteria. Most likely it has a page, a main portal, with links to content, to your webmail, a search engine and advertising. They want you to use it, because that's the only way to make the advertising space valuable, so they privilege the traffic to and from that portal. It's not a big deal anyway: the portal is inside their network, transit time is practically null, so it will respond (it should) a lot faster than any page from the outside. Add to that all the sites paying your ISP for hosting service; they're all inside the same network, privileged by that condition over any site from the outside. In a way, your ISP is breaking the concept of neutrality even if it doesn't explicitly prioritize the internal traffic. Now take the same case to a whole country. One with a decent backbone, meaning that all traffic between nodes inside the country is handled inside the country. Believe it or not, most countries don't have such a backbone. Some countries with primitive communication infrastructure grew on satellite links; the lack of land lines made satellites a more affordable alternative. Two ISPs there, located next to each other, may be linked to different satellite services. Say, a country in Africa where one ISP has a link to a satellite over the Atlantic with a land station in the USA, and the other to a satellite on the west with a land station in Israel.
A packet sent across the street will tour around the world. Going back to the country with a decent backbone: all the sites inside that country will be more accessible than the foreign ones.
And that's just the technical problem related to the nature of the network, its structure. To that we have to add the difference in bandwidth and processing power between sites. Let's say that you try to set your own search engine in your computer using your 1 Mb Internet connection. You may have the best one, be better than Google, and yet fade and die strangled by your resource limitation. It would take you a million years to visit all the sites in the web, even more time to analyze and store the relevant information for the searchs, you wouldn't have enough space to store it no matter what kind of compression you use plus all the time and overhead processing required to do that. Add to that the main purpose of the site itself, serve customers with information. It's obvious that you won't be able to do it while gathering information but, even not doing it, your capacity would be limited to a few hundreds.
Neutrality is broken by the difference in resources between sites. Sites with more processing power, more bandwidth are able to serve more customers faster and with better services. And that's being paid by the sites, they pay the carriers and the ISPs for the privilege of more resources. The bigger the business is, the more need for resources it has, the more chances to grow, hence it will invest even more. Neutrality doesn't exist today, those who can pay more are doing it, they're getting more service for the money they are paying and using that to give more service to the final users.
Finally, how are they going to make prioritization to work? I don't want to go all the way back to the very basics of networking. Let's go back to the city analogy. Today the postmen do their rounds at their own pace picking up as many packets as they can and delivering evry time they pass the corresponding door or intersection. If their storage space is full, the packets that can't be picked up have to wait untill the next round, every door or intersection has a queue where the packets are stored for the postman in a certain order. That order is by default the time of arrival, the queue is serviced first in - first out. The methods used to prioritize traffic on a network are basically two. One would be an extra postman dedicated to priority traffic, most likely a faster and bigger one, able to do its round in less time and to carry more packets at once. To do that, the queues at every exchange point are doubled, one for each postmen. The other method is use the same postman but specially trained to be picky about the packets. This postman has to decide at each point which packets pick up first, he can't just take from the top of the pile. He has to go through the queue and pick the priority packets first and then the rest. Also, he can have a separated storage space that's reserved only for priority traffic. If that space is full, he can keep picking up priority traffic using the general storage but never use the reserved space for general traffic. This is way there's a minimum bandwidth allways available for priority traffic no matter how bad is the traffic condition.
It seems simple but is not. It works fine for a simple network but the Internet is not. As we saw before, Internet is a huge group of networks interconnected, every one with its own rules and management. As long as they agree in the protocol used to exchange packets (IP Internet Protocol) they can do whatever they want with their own internal network. I do set priority traffic inside my network, I have the means to move certain packets with a minimum of bandwidth guaranteed. But at the point where my ISP is picking up my packets it doesn't matter if I set many queues, my ISP is servicing me with only one postman. I can make an agreement with them to have an extra postman, but that would work up to the point where my ISP network has to exchange those packets with someone else. This kind of agreement with ISPs is very common like in my case. Let's say that I have a branch of my company in a place to far away to do my own network but with access to an access point of my own ISP. Being an extension of my own office I'd like to have that traffic prioritized over our traffic with the rest of the world. My ISP can do that inside its network just setting the configuration of its own postmen. Any other case involving a third network would require another agreement.
Suppose that for some reason you want to have priority traffic with certain site located at the other side of the world. You won't find a route from you to that site with less than three different owned networks, in fact you'll go through many more but for the sake of this specific problem we can assume that interconected networks of the same owner can handle priority traffic as if it were only one network. And I said three because is a theoretical minimum for almost every case around the Internet, your ISP, a carrier and the ISP of the destination site. Big sites are usually closer to the backbones in terms of hops (number of times a packet has to be relayed from network to network) because they're serviced by the carriers directly. These sites are the main target of this new idea because they're the ones who can afford to pay for priority and get some advantage from it. If one big carrier gives priority to site A, every ISP connected to that particular carrier would be receiving site A's traffic on top of their queues regardless of the policy they have in their own networks. Even other carriers around the world would get site A's traffic on top. But that's it, from there on, site A's is handled as any other traffic. As you can see, only one network giving prioority traffic is not a huge advantage.
If several carriers agree in giving priority to certain packets, the scenario changes just because of the extension of the service. More exchange points will see site A's traffic on top of their queues. The problem here is who's selling priority and how are they sharing the business. In my opinion, if it gets implemented sometime in the future, the system won't go much beyond the United States and its satellites. The number of big carriers in there is limited and, if they get gubernamental support, it's easy to reach an agreement. But once they have to deal with carriers outside of that circle things start to get more complicated. The big players of Internet service are in the USA mostly, Google, Yahoo, Microsoft. They're the ones who would pay for priority. The carriers outside the USA would find themselves giving a valuable service to those sites and nobody to bill for it. I don't think this would make the priority system fail, just keep it contained inside the USA. Because most of the final users that would be benefited (or punished) by priority are in the USA. Plus, the regulations of the USA government won't make much difference outside of it.
One last point to think about is how are the sites reacting to this. I can imagine some jumping into the priority wagon without even thinking. But is this such a good idea?
Let's take a lok at it from the final user perspective. Let's say that site A wants to improve its service trying to compete with site B. Site B is more popular, has a bigger share of the traffic, has been chosen by the final users by its content, its quality of service. Now with site A being prioritized, packets to and from it goes faster. Site B is still working fine but its packets enqueued behind site A's packets. How much difference would it make? If site B is so popular over A we can expect to have only a few A packets and a lot of B packets. In average the delay generated by those few packets will be hardly noticed. Priority of traffic won't make a quality difference between competing sites. Final users are choosing based on suitability of the service they get from one site or the other. Google is the most popular search engine not because is the faster, it's because people find stuff using it. Once you see it works, that you get what you were looking for, you go over and over to get what you need. If it fails you go somewhere else. Sites with other type of content work the same way, would you read a lousy writer just because its book is available faster? or you'll go to read what you want? do you pick a movie because is just about to start? or you wait for the one you want?
To make a real difference of service through priority traffic, two sites have to be of the same service, same popularity, same content, I'd say almost identical. So site A pays to get an edge over B, what if site B decides to sign in for priority too? And once one of them or both pay for priority, how are they going to measure that they're getting it?
Of course, if the priority system is established, sites like Microsoft's will sign for it. This is seen by most people as corporative stupidity but it isn't. If you're a small site, you have to evaluate the possible consequences of paying for priority before signing in. And you have to establish a way to measure the result. That's basic management. Microsoft and other big corporation, on the other hand, can waste huge amounts of money in order to stay on top. They won't risk the chance of falling behind, it's more affordable and eficient for them to pay before and analyze later. You can say whatever you want about that policy but the truth is that Microsoft has been the leader in the market of operating systems and productivity tools for decades. But for those who have to evaluate results and get a positive result, paying for priority will be dissapointing. At least that's my view.
As a conclusion, I don't aprove gubernamental intervention or regulation. If the carriers want to establish a priority system and charge for it, they're entitled to do so. If sites want to pay for priority they're entitled to do so. In my opinion, the system won't work because is not the solution for something that's not really a problem.
Neutrality is not an easy concept to understand, mostly because there's no such thing. Neutrality means that each and every packet that goes through the net is treated equally.
The Internet doesn't enforce neutrality; its neutrality is natural, not meant or produced by human action. The net is neutral because nothing is done to avoid it. And the big issue now is that the ISPs want to change that: they want to change the rules and treat some packets differently.
Here's the idea. According to the ISPs, the major problem with the Internet today is that no bandwidth is enough. Not so long ago we did fine with a 14 Kbps modem, and some of us started with a 300 bps modem; either way we were able to use the net with the services available at the time. Soon we moved to bigger modems: 28 Kbps, 33 Kbps, 56 Kbps. Why is arguable: was it because technology allowed us to, or because the requirements of the available services grew? But the point is that, going this way (according to the ISPs), no bandwidth will be enough to ensure the quality of the services as their requirements keep growing. Today it's not out of the question to have a 2 Mbps Internet connection in your house; think about it, that's over one hundred and forty times that old 14 Kbps modem.
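The arithmetic behind that last comparison, using the figures mentioned above (just a sketch of the claim, not a measurement):

```python
# Back-of-envelope: how much home bandwidth has grown.
modem_kbps = 14        # a mid-90s modem, as mentioned in the text
broadband_kbps = 2000  # a 2 Mbps home connection

growth = broadband_kbps / modem_kbps
print(f"A 2 Mbps line is roughly {growth:.0f}x a 14 Kbps modem")
```

Which lands just above the "one hundred and forty times" figure in the text.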
The solution proposed is to break the neutrality of the network and give some packets priority. This way the services that require immediate attention will always work and those with less urgency will be delayed. They can prove mathematically how this works and how happy we'll be with the new, improved, optimized Internet.
On the other side, the neutrality advocates show a different scenario. The priority of the packets won't be determined by technical service requirements but by commercial agreements. The major players of the Internet will pay for priority. This way, if the X search engine pays for priority and the Y one doesn't, when you access X you'll get an immediate response, while when you access Y you'll have to wait. It could be a search engine, a video streaming service, an e-mail service, anything. The point is that those who can pay for priority, and are willing to do so, will get a differential treatment that makes their services more appealing to the final user. The aftermath will be that all the small players fade and die.
You're probably wondering which side I'm on, or thinking that you know already. Either way you're wrong; I'm about to crash both sides.
The priority advocates use the quality of the service as the basis for their arguments; however, one of them was very clear when he said "Google is making a lot of money using our bandwidth". So the quality of the service is not the main concern. They see that there are people making money, big money, using their infrastructure, and they want a piece of the action. But they have it already: they're being paid by Google and all the other content providers, directly or indirectly, and by all of the final users, directly or indirectly. Without all those who make content available to the final users, the business of the network itself wouldn't be what it is today, wouldn't be as profitable as it is today. They just want more money; they're not increasing the value of the service, they're about to decrease it by limiting access to the content.
The technical proposal they use to hide their real intentions is asinine, to say the least. According to them, giving priority to packets with higher speed requirements will ensure the quality of those services and keep the network less cluttered, in a way that lets all the traffic flow more easily. Among those services they mention the communication of emergency services, remote critical operation (like surgery), video and audio. Let's take a look at them one by one.
I didn't know that emergency services were using the Internet to communicate. I think that's fine; as I said before, the Internet is fast, easy to use and reliable. But not for emergency response. There are a lot of things they can do over the Internet, like surveillance cameras, web sites for public information, email for non-critical communications. For times of emergency they need real-time, coordinated communications, like the ones they already have in radios and telephones. Even if they need networks, they can use their own equipment with landlines, if they're available or can be set up, or with wireless communication. They can use the services of the same carriers that want to prioritize the emergency traffic over the Internet, on segments of network not shared with the Internet. In brief, emergency services have their own communications and, if needed, have to develop new ones. The Internet may be a non-critical support service, even a backup system, but it wasn't designed for that use and shouldn't be used that way.
The same goes for the remote operation of surgical instruments. I don't know who was the genius behind this idea; the phrase he used was something like "if there's a human being on the operating table we don't want the packet that will save his life to be late". Well, neither do I, so I have a couple of solutions for that. First, if you're about to do surgery on a human, try to be there. If there's no way to get there, for physical reasons or your busy schedule, and there's no other chance to save his life, the second solution is to get something better than the Internet. There are many choices, including one that the same people who want to prioritize your traffic can give you: a private network. Again, the Internet wasn't designed for that; it's not reliable for that kind of real-time critical operation.
The other services, not so critical in themselves, like video, audio and telephony, have the same problem. A real-time stream of analog data has to be digitized and packetized to be sent through the Internet and then reconstructed at its destination. If the packets are delayed, the quality of the service is degraded: the video freezes, the audio makes distorted sounds. But that's the way the Internet was designed; it's not reliable for streams. It's not a flaw, it's how it was created. You can't cut your steak with a fork; it's not a flaw of the fork, you need a knife. And we have just the perfect knife. If you want video in real time, easy to operate, cheap and reliable, that technology is available already. It's called TE-LE-VI-SION. If you want audio in real time, easy to operate, cheap and reliable, that technology is available already. It's called RA-DIO. And if you want telephony in real time, easy to operate, cheap and reliable, that technology is available already. It's called TE-LE-PHONE. And the beauty of all this is that these technologies were designed specifically for that; they're not being adapted, modified or "prioritized" to deliver. They work just fine and have been doing so for many years. Since they were created they have been improved, and they'll improve even more in the future. So why are we so eager to painfully transform something not fitted for a job into something able to do it? Even worse, do the math for the final user: we'd be trading our one-hundred-dollar television sets for one-thousand-dollar computers, our ten-dollar radios too. What's the point? And don't get me wrong, I think it's great to have some video, audio and telephony over the Internet. I'm happy to get so much from a network that wasn't originally designed for that. But if I want to see a movie I go to my TV set; if the movie I want is not on, I go to the video club and rent it, and I can do that using the Internet, which is cool.
If I want to listen to the radio I turn on the radio; if I want to talk to someone I call him over the phone. And if I have the chance to talk with someone too far away using the Internet, great. It's cheaper than the phone too, and it makes me so happy that I don't care if the sound is not crisp and crystal clear. It's more than enough to achieve communication, and that's more than I was expecting from the Internet. What about you?
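The degradation described above, a real-time stream chopped into packets where late packets become audible gaps, can be sketched in a few lines. The frame size is a common choice for voice, and the delay values are invented for illustration:

```python
# Toy model of streamed audio: the call is chopped into fixed-size
# frames (20 ms is a common choice); any frame arriving after its
# playback deadline is useless, which the listener hears as a gap.
FRAME_MS = 20

def playback_gaps(network_delays_ms, jitter_buffer_ms=40):
    """Count frames that miss their deadline (delay > jitter buffer)."""
    return sum(1 for d in network_delays_ms if d > jitter_buffer_ms)

delays = [10, 15, 90, 12, 300, 14]  # two frames arrive far too late
print(playback_gaps(delays))        # -> 2 audible gaps
```

A bigger jitter buffer hides more delay, at the cost of more lag in the conversation; that trade-off is exactly why real-time streams sit uneasily on a network with no delivery guarantees.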
At this point it seems pretty clear that I'm with the neutrality advocates, but I'm not. They want the government to regulate and ensure neutrality, and I don't want government regulation. The carriers own their networks and as owners they have the right to do with them whatever they want. If they want to provide traffic prioritized by any rule they choose, they're entitled to do so. It's their networks we are talking about. The rest of the world has the choice of buying service from them or not. It's that simple; any other point of view is an outrageous violation of property rights. We are used to it because our own rights are violated on a daily basis, but piling up another violation won't fix the problem. I think we have to let the carriers do what they want to do with their networks; we have to respect their rights.
There are also some technical and practical aspects that have to be taken into account. Neutrality advocates would say that my position of defending the rights of the carriers over all the rest will damage the Internet, and I agree in part. But they also have to understand that neutrality doesn't exist today and never really existed.
Every owner of a network has the ability to regulate the traffic inside it. I, for example, have full control of my network. My link with the Internet is totally under my control and I can decide how much bandwidth is available for each service, or whether a particular service is blocked. And I do it, for practical reasons. Services that are not authorized by company policy are blocked, webmail pages that refresh too often are restricted in the amount of bandwidth they use, and services to customers and contractors are prioritized. Your ISP is probably doing the same with different criteria. Most likely it has a page, a main portal, with links to content, to your webmail, a search engine and advertising. They want you to use it because it's the only way to make the advertising space valuable, so they privilege the traffic to and from that portal. It's not a big deal anyway; the portal is inside their network, transit time is practically null, so it will respond (it should) a lot faster than any page from the outside. Add to that all the sites that pay your ISP for hosting: they're all inside the same network and privileged by that condition over any site from the outside. In a way, your ISP is breaking the concept of neutrality even if they don't explicitly prioritize the internal traffic.
Now take the same case to a whole country. One with a decent backbone, meaning that all traffic between nodes inside the country is handled inside the country. Believe it or not, most countries don't have such a backbone. Some countries with primitive communication infrastructure grew on satellite links; the lack of landlines made satellites a more affordable alternative. Two ISPs there, located one next to the other, may be linked to different satellite services. Say a country in Africa with one link to a satellite over the Atlantic with a land station in the USA, and the other to a satellite to the west with a land station in Israel.
One packet sent across the street will tour the world. Going back to the country with a decent backbone: all the sites inside that country will be more accessible than the foreign ones.
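The kind of per-service bandwidth control mentioned at the start of this section is commonly built from token buckets. Here's a minimal sketch; the class, names and numbers are illustrative, not any vendor's API:

```python
# A minimal token-bucket rate limiter: tokens refill at a fixed rate,
# and a packet is only forwarded if enough tokens are available.
class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s   # sustained rate cap
        self.capacity = burst_bytes    # maximum short burst
        self.tokens = burst_bytes      # start with a full bucket
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, up to the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # drop or delay it

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
print(bucket.allow(1500, now=0.0))  # True: the burst allowance covers it
print(bucket.allow(1500, now=0.5))  # False: only ~500 bytes refilled
```

A network owner runs one such bucket per service (or per class of traffic) to cap the bandwidth that, say, an over-eager webmail page can consume.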
And that's just the technical problem related to the nature of the network, its structure. To that we have to add the difference in bandwidth and processing power between sites. Let's say that you try to set up your own search engine on your computer using your 1 Mbps Internet connection. You may have the best one, better than Google, and yet fade and die strangled by your resource limitations. It would take you a million years to visit all the sites on the web, even more time to analyze and store the relevant information for the searches, and you wouldn't have enough space to store it no matter what kind of compression you use, plus all the time and processing overhead required to do it. Add to that the main purpose of the site itself: serving customers with information. It's obvious that you won't be able to do that while gathering information but, even if you weren't, your capacity would be limited to a few hundred users.
Neutrality is broken by the difference in resources between sites. Sites with more processing power and more bandwidth are able to serve more customers, faster and with better services. And that's paid for by the sites: they pay the carriers and the ISPs for the privilege of more resources. The bigger the business, the more resources it needs and the more chances it has to grow, hence it will invest even more. Neutrality doesn't exist today; those who can pay more are doing it, they're getting more service for the money they pay and using that to give more service to the final users.
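The "million years" is hyperbole, but even a back-of-envelope calculation with generous assumptions (the page count and average size below are rough guesses, not measurements) shows the crawl alone is hopeless on a home line:

```python
# Rough numbers: how long would one 1 Mbps link take to fetch the web?
pages = 10_000_000_000        # assume ~10 billion pages
avg_page_bits = 50_000 * 8    # assume ~50 KB per page
link_bps = 1_000_000          # 1 Mbps, fully dedicated to crawling

seconds = pages * avg_page_bits / link_bps
years = seconds / (3600 * 24 * 365)
print(f"~{years:.0f} years just to download every page once")
```

And that's before analysis, storage, re-crawling for freshness, or serving a single query.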
Finally, how are they going to make prioritization work? I don't want to go all the way back to the very basics of networking, so let's go back to the city analogy. Today the postmen do their rounds at their own pace, picking up as many packets as they can and delivering every time they pass the corresponding door or intersection. If their storage space is full, the packets that can't be picked up have to wait until the next round; every door or intersection has a queue where the packets are stored for the postman in a certain order. That order is by default the time of arrival: the queue is serviced first in, first out. The methods used to prioritize traffic on a network are basically two. One is an extra postman dedicated to priority traffic, most likely a faster and bigger one, able to do his round in less time and to carry more packets at once. To do that, the queues at every exchange point are doubled, one for each postman. The other method is to use the same postman, but specially trained to be picky about the packets. This postman has to decide at each point which packets to pick up first; he can't just take from the top of the pile. He has to go through the queue and pick the priority packets first and then the rest. Also, he can have a separate storage space reserved only for priority traffic. If that space is full, he can keep picking up priority traffic using the general storage, but he never uses the reserved space for general traffic. This way there's a minimum bandwidth always available for priority traffic no matter how bad the traffic conditions are.
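The second method, one link serving the priority queue first with some buffer space reserved for priority packets only, can be sketched like this (purely illustrative, not a real router's implementation):

```python
from collections import deque

class PriorityPort:
    """Strict-priority scheduling with a reserved buffer for priority."""

    def __init__(self, shared_slots=4, reserved_slots=2):
        self.prio, self.best_effort = deque(), deque()
        self.shared, self.reserved = shared_slots, reserved_slots

    def enqueue(self, packet, priority=False):
        # Shared slots used = priority overflow + all best-effort packets.
        used_shared = max(0, len(self.prio) - self.reserved) + len(self.best_effort)
        if priority:
            # Priority traffic may use the reserved slots, then shared ones.
            if len(self.prio) < self.reserved or used_shared < self.shared:
                self.prio.append(packet)
                return True
        elif used_shared < self.shared:
            # Best-effort traffic never touches the reserved slots.
            self.best_effort.append(packet)
            return True
        return False  # buffer full: the packet is dropped

    def dequeue(self):
        # The "picky postman": drain the priority queue before anything else.
        if self.prio:
            return self.prio.popleft()
        return self.best_effort.popleft() if self.best_effort else None

port = PriorityPort()
port.enqueue("b1")
port.enqueue("b2")
port.enqueue("p1", priority=True)
print(port.dequeue())  # -> p1, served ahead of b1 and b2
```

The reserved slots are what guarantees the minimum bandwidth for priority traffic: even with the shared buffer full of ordinary packets, a priority packet still gets in.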
It seems simple but it's not. It works fine for a simple network, but the Internet is not one. As we saw before, the Internet is a huge group of interconnected networks, each with its own rules and management. As long as they agree on the protocol used to exchange packets (IP, the Internet Protocol), they can do whatever they want with their own internal network. I do set priority traffic inside my network; I have the means to move certain packets with a minimum of bandwidth guaranteed. But at the point where my ISP picks up my packets it doesn't matter how many queues I set: my ISP is servicing me with only one postman. I can make an agreement with them to have an extra postman, but that only works up to the point where my ISP's network has to exchange those packets with someone else. This kind of agreement with ISPs is very common, as in my case. Let's say that I have a branch of my company in a place too far away to reach with my own network, but with access to an access point of my own ISP. Being an extension of my own office, I'd like to have that traffic prioritized over our traffic with the rest of the world. My ISP can do that inside its network just by configuring its own postmen. Any other case, involving a third network, would require another agreement.
Suppose that for some reason you want priority traffic with a certain site located on the other side of the world. You won't find a route from you to that site with fewer than three separately owned networks; in fact you'll go through many more, but for the sake of this specific problem we can assume that interconnected networks of the same owner handle priority traffic as if they were one network. And I said three because that's the theoretical minimum for almost every case around the Internet: your ISP, a carrier and the ISP of the destination site. Big sites are usually closer to the backbones in terms of hops (the number of times a packet has to be relayed from network to network) because they're serviced by the carriers directly. These sites are the main target of this new idea because they're the ones who can afford to pay for priority and get some advantage from it. If one big carrier gives priority to site A, every ISP connected to that particular carrier will receive site A's traffic on top of their queues regardless of the policy they have in their own networks. Even other carriers around the world will get site A's traffic on top. But that's it; from there on, site A's traffic is handled like any other. As you can see, only one network giving priority to traffic is not a huge advantage.
If several carriers agree on giving priority to certain packets, the scenario changes just because of the extension of the service. More exchange points will see site A's traffic on top of their queues. The problem here is who's selling priority and how they're sharing the business. In my opinion, if it gets implemented sometime in the future, the system won't go much beyond the United States and its satellites. The number of big carriers there is limited and, if they get governmental support, it's easy to reach an agreement. But once they have to deal with carriers outside of that circle, things start to get more complicated. The big players of Internet service, Google, Yahoo, Microsoft, are mostly in the USA. They're the ones who would pay for priority. Carriers outside the USA would find themselves giving a valuable service to those sites with nobody to bill for it. I don't think this would make the priority system fail, just keep it contained inside the USA, because most of the final users that would be benefited (or punished) by priority are in the USA. Plus, the regulations of the USA government won't make much difference outside of it.
One last point to think about is how the sites are reacting to this. I can imagine some jumping on the priority wagon without even thinking. But is this such a good idea?
Let's take a look at it from the final user's perspective. Say that site A wants to improve its service to compete with site B. Site B is more popular, has a bigger share of the traffic, and has been chosen by the final users for its content and its quality of service. Now with site A being prioritized, packets to and from it go faster. Site B still works fine, but its packets are enqueued behind site A's packets. How much difference would it make? If site B is so much more popular than A, we can expect only a few A packets and a lot of B packets. On average, the delay generated by those few packets will hardly be noticed. Priority of traffic won't make a quality difference between competing sites. Final users choose based on the suitability of the service they get from one site or the other. Google is the most popular search engine not because it's the fastest, but because people find stuff using it. Once you see that it works, that you get what you were looking for, you go back over and over to get what you need. If it fails, you go somewhere else. Sites with other types of content work the same way: would you read a lousy writer just because his book is available faster, or would you go read what you want? Do you pick a movie because it's just about to start, or do you wait for the one you want?
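The "hardly noticed" claim can be put in numbers. The sketch below compares plain FIFO service against priority-first service for a queue where one packet in twenty is priority traffic; the queue mix and per-packet service time are invented for illustration:

```python
def extra_wait_ms(is_priority, service_ms=1.0):
    """Average added wait for best-effort packets when priority packets
    jump ahead, versus plain FIFO (each packet takes service_ms)."""
    waits = []
    for i, prio in enumerate(is_priority):
        if not prio:
            # Priority packets behind this one that will overtake it.
            jumped_ahead = sum(is_priority[i + 1:])
            waits.append(jumped_ahead * service_ms)
    return sum(waits) / len(waits)

# One priority packet among twenty: about half a millisecond on average.
queue = [False] * 10 + [True] + [False] * 9
print(f"{extra_wait_ms(queue):.2f} ms extra on average")
```

With the priority traffic a small fraction of the total, the penalty paid by everyone else stays far below anything a user would perceive.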
To make a real difference of service through priority traffic, two sites have to offer the same service, with the same popularity and the same content; I'd say almost identical. So site A pays to get an edge over B. What if site B decides to sign up for priority too? And once one of them, or both, pay for priority, how are they going to verify that they're getting it?
Of course, if the priority system is established, sites like Microsoft's will sign up for it. This is seen by most people as corporate stupidity, but it isn't. If you're a small site, you have to evaluate the possible consequences of paying for priority before signing up, and you have to establish a way to measure the result. That's basic management. Microsoft and other big corporations, on the other hand, can waste huge amounts of money in order to stay on top. They won't risk the chance of falling behind; it's more affordable and efficient for them to pay first and analyze later. You can say whatever you want about that policy, but the truth is that Microsoft has been the leader in the market of operating systems and productivity tools for decades. But for those who have to evaluate results and need a positive one, paying for priority will be disappointing. At least that's my view.
As a conclusion, I don't approve of governmental intervention or regulation. If the carriers want to establish a priority system and charge for it, they're entitled to do so. If sites want to pay for priority, they're entitled to do so. In my opinion, the system won't work because it tries to solve something that's not really a problem.