ICANN and internationalization! - Part 1
The Boring Background Stuff
The Internet Corporation for Assigned Names and Numbers (ICANN) is a nonprofit organization that incorporated in Los Angeles in September 1998. Under contract from the US Department of Commerce, ICANN runs a couple of databases that maintain the Internet's consistency. They hold the IP address blocks for IPv4 and IPv6, the assignment of those blocks to the various Internet registries, Internet protocol identifiers, the generic top level domains (.com, .net, etc.), the country-specific top level domains, autonomous system allocation, the time zone database and the list of root name servers. Prior to ICANN, these duties were handled by IANA (now part of ICANN) beginning in 1988, which performed the same tasks on contract from the US Department of Commerce.
An important aspect to remember is that ICANN serves primarily as a database clearinghouse. Most actual Internet technologies and standards are adopted by the Internet Engineering Task Force (IETF), originally a group of US government funded researchers. Now it is completely decentralized and informal, and doesn't even officially have membership. The Internet Society (ISOC) is an international organization that provides a more formal framework for the IETF, with headquarters in the US and Switzerland. Its function is to support the IETF logistically, provide education, and run the .ORG top level domain. Aside from 'owning' the .ORG registry, it's entirely a support organization rather than an entity providing technical or organizational direction to the Internet as a whole. The IETF and ISOC are not under contract with the US government, and are already highly internationalized.
DNS itself started as manually maintained text files stored on machines connected to ARPAnet. Stanford Research Institute collected the information and transmitted it to any involved or interested parties. Paul Mockapetris designed and wrote the first implementation of DNS in 1983. From 1978 to 1991, name registration was handled by Stanford Research Institute. From 1991 until 1998, it was run by Network Solutions (an American company). From 1998 until the present, it has been run by ICANN.
Historical ICANN Concerns
Since the inception of the domain name system, the US government has asserted ownership and oversight. Until 1991, the DNS root name servers were maintained by SRI because no one else wanted to do the work. In 1991, the US government asserted more direct control when it passed responsibility to Network Solutions. Network Solutions did an acceptable job during its tenure, but as the Internet's importance grew, there was a strong desire for more competition in selling domain names. Since Network Solutions was a for-profit company, its ownership of the root name servers was seen at the time as a very strong commercial advantage. ICANN, by contrast, does not engage in commercial activities and is solely focused on managing its databases.
From its inception, ICANN has handled its responsibilities without technical problems. There have, however, been a number of political issues, notably with VeriSign (a US corporation and the operator of the .COM and .NET top level domains). While VeriSign has handled its technical responsibilities near perfectly, it has overreached its delegated authority several times.
What is DNS?
DNS converts people-friendly names such as google.com into machine-friendly Internet Protocol addresses, and vice versa.
Sounds simple, and it is. Secondary uses include identifying mail servers and storing various other small bits of text that are handy for running a domain attached to the Internet.
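To make the basics concrete, here is a minimal sketch of both directions of that conversion using Python's standard library resolver. It looks up "localhost" so it works without external network access; any hostname works the same way.

```python
import socket

# Forward lookup: name -> IP address (the common case).
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1

# Reverse lookup: IP address -> name. Results depend on the
# PTR records (or hosts file) visible to the resolver.
try:
    name, aliases, addresses = socket.gethostbyaddr(ip)
    print(name)
except socket.herror:
    # Many IP addresses have no reverse mapping configured.
    pass
```

Under the hood these calls ask the system's configured DNS resolver, which is exactly the machinery whose weaknesses are discussed below.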
DNS's Shortcomings
DNS was basically designed for systems that more or less trusted each other. Security wasn't even an afterthought. It was designed to scale a bit, and to replace manually maintained text files. It's not all bleak: most DNS servers now refuse zone transfers, so one (usually) can't get a complete map of a network or a domain's subdomains.
However, there is a significant problem with cache poisoning. A bad guy pretends to be another DNS server and gives out bad information. DNS doesn't have any authentication or validation mechanisms, so there's no way of knowing the data is bad. The bad guy can either sit in the middle passing information to the intended server (say, harvesting credit cards or passwords), or directly serve content (say, malware or malicious code). Long story short, DNS is not designed to resist cache poisoning. The biggest issue is the short 16-bit transaction ID, but other issues persist. This is a known problem and, while mitigated, cannot be fully fixed.
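The transaction ID problem is easy to quantify. A 16-bit ID gives only 65,536 possibilities, so a blind attacker racing the real reply needs surprisingly few forged packets; the sketch below does the arithmetic (it ignores source-port randomization, the mitigation modern resolvers add on top).

```python
# A classic DNS transaction ID is only 16 bits wide.
TXID_SPACE = 2 ** 16  # 65,536 possible IDs

def spoof_success_probability(forged_replies: int) -> float:
    """Chance that at least one forged reply guesses the txid."""
    return 1 - (1 - 1 / TXID_SPACE) ** forged_replies

print(spoof_success_probability(1))      # one guess: ~0.0000153
print(spoof_success_probability(65536))  # one full "window" of guesses: ~0.632
```

Flooding one window of 65,536 forged replies already gives roughly a 63% chance of a hit, which is why the bare transaction ID was never an adequate defense.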
Another issue is DNS hijacking: setting up a rogue DNS server that intentionally gives false information. Again, DNS has no built-in authentication. Many service providers regularly hijack (usually) mistyped or unknown requests: you type in a domain that doesn't exist, and your ISP gives you an ad-laden error page instead of just an error message. Bad guys use this for cross-site scripting attacks, which are (vastly oversimplified) a section of a web page abusing permissions granted to the whole page.
DNS clients can't verify the data they receive from DNS servers, but DNS servers don't verify their clients either. If an IP address asks for DNS information, the server simply sends the data without checking that the request really came from that address. By spoofing the source IP on the request, an attacker can send a 64-byte query to a DNS server and have the server send a much larger answer to the fake (victim) address. A gigabyte of requests could end up with 50 gigabytes of answers hitting the victim. This is called a "DNS amplification attack".
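The back-of-the-envelope math behind those numbers is simple. The response size below is illustrative (real figures vary by query type and zone contents), but it shows where a 50x figure comes from:

```python
# Amplification factor: small spoofed query in, large answer out.
request_bytes = 64
response_bytes = 3200  # illustrative: e.g. a large ANY/TXT answer

amplification = response_bytes / request_bytes
print(amplification)  # 50.0

# Scale it up: 1 GiB of spoofed requests from the attacker...
sent = 1 * 1024 ** 3
reflected = sent * amplification
print(reflected / 1024 ** 3)  # ...becomes 50.0 GiB aimed at the victim
```

The attacker pays only for the small requests; the DNS server unwittingly supplies the bandwidth of the attack.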
DNSSec - Eh, nice try...
Domain Name System Security Extensions (DNSSec) were designed to add authentication to DNS. All DNSSec zones give their DNS answers back cryptographically signed, proving the data was not altered in transit. The DNS data is not encrypted, however, and DNSSec doesn't protect against denial of service attacks.
One huge vulnerability that DNSSec introduced is zone enumeration. Nearly all DNS servers refuse zone transfer requests, i.e. requests for everything a server knows about a domain. This prevents bad guys from trivially mapping out a network or domain. DNSSec's original design (its NSEC records) makes zone enumeration trivial again. The difference is between looking up phone numbers one name at a time and being handed the entire phone book.
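A toy model shows why. NSEC records form a sorted, circular chain in which each name points to the next name in the zone, and a query for a nonexistent name returns the surrounding pair; asking "what comes next?" repeatedly therefore walks the whole zone. The zone below is entirely made up for illustration:

```python
# Hypothetical zone: each NSEC record names the next entry,
# and the last wraps back to the zone apex.
nsec_chain = {
    "example.com":      "ftp.example.com",
    "ftp.example.com":  "mail.example.com",
    "mail.example.com": "www.example.com",
    "www.example.com":  "example.com",  # circular: back to apex
}

def walk_zone(apex: str) -> list[str]:
    """Enumerate every name in the zone by following NSEC links."""
    names, current = [], apex
    while True:
        names.append(current)
        current = nsec_chain[current]
        if current == apex:  # chain is circular, so we're done
            return names

print(walk_zone("example.com"))
# ['example.com', 'ftp.example.com', 'mail.example.com', 'www.example.com']
```

Later extensions (NSEC3) hash the names to make this harder, but the underlying tension between proving nonexistence and hiding the zone's contents remains.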
Another issue is that DNSSec relies on a centralized chain of trust. Each domain generates its own keys, hands part of them over to the folks it buys its domain name from, and the domain registrar gives that information to the zone operator designated by ICANN. If ICANN, the zone operator and/or the domain registrar are untrustworthy, DNSSec does not provide authentication.
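A toy model of such a chain (not real DNSSec cryptography, just hashes standing in for signatures) makes the trust structure visible: each level vouches for the next, a validator checks every link, and a single bad link breaks validation. But if the parties holding the upper links collude, the validator has no way to tell.

```python
import hashlib

def toy_sign(secret: str, data: str) -> str:
    """Stand-in 'signature': hash of the signer's secret plus the data."""
    return hashlib.sha256((secret + data).encode()).hexdigest()

# Hypothetical secrets for root zone, TLD operator, and domain owner.
root_secret, tld_secret, zone_secret = "root-k", "tld-k", "zone-k"

chain = [
    # (data being vouched for, signature over it, secret that made it)
    ("tld-public-key",  toy_sign(root_secret, "tld-public-key"),  root_secret),
    ("zone-public-key", toy_sign(tld_secret,  "zone-public-key"), tld_secret),
    ("A 93.184.216.34", toy_sign(zone_secret, "A 93.184.216.34"), zone_secret),
]

def validate(chain) -> bool:
    """Every link must check out; one forged link fails the whole chain."""
    return all(sig == toy_sign(secret, data) for data, sig, secret in chain)

print(validate(chain))  # True
```

Swap any signature in the chain for a forged one and validate() returns False, which is the guarantee DNSSec offers; what it cannot offer is protection when the root of the chain itself misbehaves.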
DNSv2 - It'll never happen
DNS should be overhauled from scratch, while remaining backwards compatible. It should be written to protect against false data, man-in-the-middle attacks, denial of service attacks, cache poisoning, etc. It should not allow zone enumeration. The data should be able to be encrypted so it can't be read by unintended parties, as well as cryptographically signed so the contents are known to be accurate. It would also be designed for scalability.
Clients should be minimally verified by a handshake to prove that they're actually requesting the data.
It should also be suspicious. DNS data trustworthiness shouldn't be a choice between "none" and "absolute"; it should allow DNS servers and clients to partially trust sources. If a 'trusted' certificate authority is compromised, its trust level should be able to be lowered, but not eliminated, allowing continuity of service without completely compromising security.
This is unlikely to happen because of the sheer magnitude of existing equipment. Phasing in a new version of DNS would realistically take decades, even if it was specifically designed for ease of transition and adoption.
Why does DNS matter?
Because the overwhelming majority of eCommerce on the internet relies on it.
eCommerce is protected by SSL. SSL (Secure Sockets Layer) was actually replaced by Transport Layer Security (TLS), but the acronyms are often used interchangeably. An SSL certificate has two parts. One is public, and given to users when they browse a web page. The other is private, kept on the server, and never distributed. The certificate is additionally signed by a Certificate Authority (CA). The CA does some minimal checking to ensure the requester is who they say they are, and adds its own signature testifying that the certificate is legitimate. The certificate is tied to a domain name.
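That "tied to a domain name" part is a check the browser performs. Here is a toy sketch of it: a hypothetical matcher comparing a hostname against the name a certificate claims, including the common one-label wildcard form. Real TLS libraries implement this far more carefully; this is only an illustration of the idea.

```python
def hostname_matches(cert_name: str, hostname: str) -> bool:
    """Toy check: does a certificate name (possibly with a
    leading '*' wildcard label) cover this hostname?"""
    cert_parts = cert_name.lower().split(".")
    host_parts = hostname.lower().split(".")
    if len(cert_parts) != len(host_parts):
        return False  # a '*' matches exactly one label, never several
    return all(c == "*" or c == h
               for c, h in zip(cert_parts, host_parts))

print(hostname_matches("www.example.com", "www.example.com"))  # True
print(hostname_matches("*.example.com", "mail.example.com"))   # True
print(hostname_matches("*.example.com", "a.b.example.com"))    # False
```

The point of the next paragraph is that this check is only as good as the DNS answer that brought you to the server in the first place.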
So SSL relies on a certificate and a domain name. Compromise the certificate and you can read the data, but not inject trusted data into the stream. Compromise a computer's DNS, and you can point requests for the domain name wherever you wish. Again, this would allow the bad guy to either passively listen in the middle, or point the victim to his own server. If a bad guy points you to his server, he can also present a forged certificate, potentially bypassing that protection completely.