ISP Level Malware Filtering
An Extended Clean Feed?
Lavasoft AB, Gothenburg Sweden
Should Internet Service Providers (ISPs) supply their customers with an Internet connection over a network feed that is free of illegal Web content and malware - programs that can cause network lag, compromise system security and threaten user privacy? By analogy, a water company must ensure that the water in its pipes is uncontaminated and flows securely all the way to its customers' taps. Should that kind of extended 'clean feed' responsibility be laid on the shoulders of ISPs - and would that even be possible? Some ISPs already filter certain illegal or "inappropriate" Web content. If ISPs are already performing partial filtering, why omit the filtering of malware?
This article's objective is to explore ISP level malware filtering in order to see if malware can be neutralized at an early, preemptive stage - before it contaminates local networks and systems - and to investigate if any such projects are planned or ongoing.
The Concept of Clean Feed
The concept of clean feed is based on filtering Internet traffic against a blacklist containing the Internet addresses (URLs) of sites serving illegal Web content, such as content related to child pornography. A risk with this filtering approach is that whole domains may be blocked, rather than just the page serving the illegal content. This means that potential false positives (blocking URLs that serve legitimate content) could cause serious inconvenience for Internet users, especially if the filtering is done at the ISP level. Blocking by domain or IP address generates the same type of problem. To avoid such problems, the Internet traffic could instead be filtered dynamically, meaning that the traffic content is analyzed for certain words or images, which are blocked if they match a signature stored in a signature database.
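The distinction between domain-level and page-level blocking can be sketched in a few lines. The sketch below is illustrative only - the hostnames and list contents are invented, and real clean feed systems operate inside the network rather than on full URLs like this - but it shows why matching on host plus path avoids overblocking an entire domain:

```python
from urllib.parse import urlsplit

# Hypothetical page-level blacklist (exact host + path entries, not whole domains).
BLACKLIST = {
    "blocked.example.com/illegal-page",
}

def is_blocked(url: str) -> bool:
    """Return True only if the exact host+path is blacklisted.

    Matching host+path (rather than the hostname alone) means one
    illegal page can be blocked without blocking the whole domain.
    """
    parts = urlsplit(url)
    return (parts.hostname or "") + parts.path in BLACKLIST

print(is_blocked("http://blocked.example.com/illegal-page"))  # True
print(is_blocked("http://blocked.example.com/other-page"))    # False
```

A hostname-only match would return True for both requests above, which is exactly the overblocking problem described.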
The Swedish company NetClean has developed a clean feed solution that has been used for roughly two years by the Swedish ISP TeliaSonera. NetClean's WhiteBox solution uses a URL block list containing the addresses of sites to be blocked; these are sites serving illegal content related to child pornography. The URLs are resolved to their IP addresses by NetClean's WhiteBox server, and those addresses are then propagated to the networks to be filtered via BGP (Border Gateway Protocol, the core routing protocol of the Internet). The network traffic is routed and tunneled to the WhiteBox server, which checks URL requests against those listed in the URL block list. When a match is found, a specific block page is returned; otherwise the request is processed in the normal manner, allowing the page to be accessed.
According to NetClean, the WhiteBox solution does not cause any network performance degradation. Blocking unique URLs, such as "www.domain.com/PageToBeBlocked", makes it possible to block only portions of websites. This supports the manual creation of block lists, such as those provided by the Australian Communications and Media Authority (ACMA) or the UK's Internet Watch Foundation (IWF). NetClean was the first company to develop a technique for detecting child pornography-related images based on signatures; illegal images are given a unique ID signature, or digital fingerprint, with the help of image analysis software. This technique is implemented in NetClean's ProActive package, which has also been adopted by TeliaSonera, among others. NetClean ProActive for the Internet Content Adaptation Protocol (ICAP) works by routing network traffic through a proxy server. All pictures are then scanned and compared to the signatures in an image signature database before the request is completed. Illegal images are blocked and the incidents are reported.1
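The signature-database idea can be illustrated with a minimal sketch. Note the simplification: a real system like NetClean's uses image-analysis fingerprints designed to survive re-encoding and resizing, whereas this sketch uses a plain cryptographic hash of the raw bytes, and all data and function names here are invented for illustration:

```python
import hashlib

# Stand-in for an image signature database of known illegal images.
SIGNATURE_DB: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    # Simplified "digital fingerprint": a SHA-256 of the raw bytes.
    # Real image-ID systems use robust perceptual fingerprints instead.
    return hashlib.sha256(image_bytes).hexdigest()

def register_illegal_image(image_bytes: bytes) -> None:
    SIGNATURE_DB.add(fingerprint(image_bytes))

def should_block(image_bytes: bytes) -> bool:
    """Scan an image before the request completes; block on a DB match."""
    return fingerprint(image_bytes) in SIGNATURE_DB

register_illegal_image(b"example-known-illegal-image-bytes")
print(should_block(b"example-known-illegal-image-bytes"))  # True -> block
print(should_block(b"some-other-image-bytes"))             # False -> pass
```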
ISP Level Web Content Filtering
ISP level Web content filtering is already a reality in many countries, including Great Britain and Sweden.
In Australia, the Australian Communications and Media Authority (ACMA) recently ordered a second trial to evaluate ISP level content filters; the last similar trial was conducted in 2005. The fact that a "live pilot" of the Web content filtering trial is ongoing makes the process even more interesting to follow; let's take a closer look at the report from the "Closed Environment Testing of ISP level Internet Content Filtering" trial that preceded the ongoing live pilot.2
The main objective of the Australian trials was and is to find out if ISP-based filters could be used to provide a clean feed to Australian households. This was planned as a broad spectrum solution affecting all households that did not explicitly ask their ISPs to be exempted. The trials are meant to clarify how the filtering affects network performance, along with the obvious: if, and to what extent, the filters can identify and block illegal and "inappropriate" Web content. What is considered inappropriate is not clearly defined in the report. The Australian Family Association, however, states, "[S]ome content found online may not be illegal, but it is still of serious concern to many families, e.g., sites promoting suicide, or self-starvation or other forms of self-harm."3
The ability to filter non-web traffic and the customizability of the filters are other factors that were and are being investigated in the trials. ACMA uses its own blacklist for content that should be blocked. The ACMA blacklist consists of URLs associated with locations that serve images of sexually abused children, and the blacklist is therefore considered, at this point, to be merely a child pornography blacklist. ACMA has also considered implementing more "sophisticated" filtering in order to provide extended web filtering services to Australian households that opt for it. Such "sophisticated" filtering could encompass automated content filtering, allowing for scanning and evaluation of text, images and video. This type of filtering is already used by Australia's New South Wales (NSW) public education sector, which filters Internet access for over a million computers across its networks.3
The Effects of Filtering on Performance and Efficiency
According to the published ACMA trial report, the filtered network suffered a performance degradation ranging from 2 to 87 percent across the tested filtering solutions. By comparison, the previous test (conducted in 2005) showed a performance degradation ranging from 75 to 98 percent. The decrease in network lag between the two tests indicates great improvement, but it is important to keep in mind that the filtering still caused some degree of network lag, and an extremely low level of network lag is crucial in large networks. According to the ACMA trial report, a network performance degradation of 2 percent, achieved by the best performing filtering solution, is considered a standard or acceptable level among ISP level Web content filtering products.
The effectiveness of the filtering solutions was tested using three separate lists of URLs, containing a total of nearly 4,000 URLs. The efficiency of blocking inappropriate web content ranged between 88 and 97 percent, and the level of overblocking (blocking of legitimate content) varied between one and eight percent across the tested filtering solutions. Three of the tested filtering solutions managed to block more than 95 percent of the child pornography URLs on ACMA's blacklist, but none of the solutions offered 100 percent blocking. Even though three of the tested filtering solutions showed extensive blocking capacity, the fact remains: some illegal content was not caught by the filters. Illegal content, such as child pornography, should not be able to pass an efficient filtering solution; the fault tolerance, in this case, should be zero.
The Filtering of Malware
There are many different Unified Threat Management (UTM) systems on the market. Network-based UTM appliances are often offered in bundles including Web content filtering, anti-spam, anti-virus, and network load balancing services, for both small home and office networks and larger enterprise-level networks. This seems to also be the case with ISP level filtering products; the latest ACMA Web content filtering trial report states that many of the filtering solutions represented in the test could be extended with anti-virus, anti-spam and anti-malware capabilities.
In the UTM appliance market, high customizability is considered important because "one-size-fits-all" solutions often fail to fully address the needs posed by highly diversified network environments. Vendors such as Websense and BlueCoat Systems provide high capacity standalone Web content filtering solutions that can be extended to also offer malware filtering. Such extended solutions usually depend on security gateways or proxy servers that are set to scan and filter the traffic between the Internet and local networks. NetClean - the Swedish company, mentioned above, that detects child pornography-related images based on signatures - likewise relies on routing network traffic through a proxy server (one that supports ICAP), where images can be matched against an image signature database. NetClean has developed a technological partnership with BlueCoat Systems, experts in high-end caching systems and secure proxy solutions. NetClean ProActive for ICAP is verified to work with BlueCoat's ProxySG appliances and with proxy servers such as SafeSquid, Squid, Mara Systems and Webwasher. NetClean states that they, in conjunction with BlueCoat Systems and their ProxySG appliances, can deliver "complete security solutions", including virus-scanning, even for large ISPs.4
So, if the technology exists - and apparently it does - why is it not implemented in large scale by ISPs in order to provide an extended clean feed, including malware filtering, to their customers?
Could it be that such filtering solutions have not yet matured to a level where the network performance degradation caused by extended traffic filtering can be held to an acceptable level? The latest ACMA Web content filtering trial, however, showed that the best performing filtering solution caused only a 2 percent network performance degradation, which was regarded as acceptable. ACMA also seems open to extended filtering solutions for customers who opt for them.
The fact that some illegal Web content manages to slip through the filters, along with the fact that illegal and inappropriate Web content carried by protocols other than HTTP is not filtered, raises questions about the usability of the potential filtering solution. Also, the Australian government seems focused on filtering what it regards as "inappropriate" content, even though some Australian ISPs, like Internode, would rather focus on malware filtering because such filtering would generate more value.5
The fact is that non-web traffic in general, and peer-to-peer traffic in particular, constitutes a large portion of total Internet traffic. Efficient content filtering solutions should therefore also be able to filter and block content carried by non-web protocols, such as the Simple Mail Transfer Protocol (SMTP) or the Real Time Streaming Protocol (RTSP). The latest ACMA trial showed that two Web content filtering solutions were able to block "inappropriate" content carried via SMTP, and that only one solution could block "inappropriate" content in streaming media. To filter network data streams for malware in an efficient manner, several protocols - such as Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), File Transfer Protocol (FTP), SMTP, Post Office Protocol (POP), Internet Message Access Protocol (IMAP), and peer-to-peer (P2P) protocols - need to be filtered. In addition, Common Internet File System (CIFS), Secure Sockets Layer (SSL), MAPI, SOCKS, AOL IM, Yahoo IM, Microsoft IM, RTSP, QuickTime and TCP tunneling need to be filtered when aiming for a complete content filtering solution.
ISP level malware filtering could be implemented by tunneling all network traffic through transparent proxy servers where the traffic is filtered. Anti-virus or anti-spyware solutions based on ICAP could be used to scan both incoming and outgoing content in real time; malicious content is blocked while legitimate content passes through unaltered. Passing files could be hashed - creating full or partial digital signatures of the files - and matched against the signatures stored in a malware signature database. Another approach would be to cache files in order to subject them to a later heuristic scan within the file cache. If a file in the cache is found to be malicious by the heuristic scan, its signature is inserted into the malware signature database so that it can be blocked in the future.
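The two-stage flow described above - a real-time signature check, with unknown files cached for a later heuristic pass that feeds new signatures back into the database - can be sketched as follows. Everything here is a hypothetical illustration: the sample data, the trivial marker-string "heuristic", and the function names are assumptions, not any vendor's actual implementation:

```python
import hashlib
from collections import deque

# Stand-in malware signature database, seeded with one known-bad digest.
MALWARE_DB = {hashlib.sha256(b"known-bad-sample").hexdigest()}
file_cache: deque = deque()  # files cached for the later heuristic scan

def looks_malicious(data: bytes) -> bool:
    # Placeholder heuristic: flags a marker string. Real scanners use
    # structural and behavioral analysis, not substring matching.
    return b"EVIL-MARKER" in data

def filter_file(data: bytes) -> str:
    """Real-time stage: block known signatures, cache everything else."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in MALWARE_DB:
        return "BLOCK"
    file_cache.append((digest, data))
    return "PASS"

def run_heuristic_scan() -> None:
    """Deferred stage: scan cached files and learn new signatures."""
    while file_cache:
        digest, data = file_cache.popleft()
        if looks_malicious(data):
            MALWARE_DB.add(digest)

print(filter_file(b"known-bad-sample"))          # BLOCK (signature match)
print(filter_file(b"payload with EVIL-MARKER"))  # PASS  (not yet known)
run_heuristic_scan()
print(filter_file(b"payload with EVIL-MARKER"))  # BLOCK (signature learned)
```

The design choice worth noting is that the heuristic scan runs out of band, so the expensive analysis never sits on the real-time request path; only the cheap hash lookup does.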
Malware URLs could be saved in a database for blocking or research purposes. Spyware creators often recompile their spyware code to avoid detection by malware scanners that rely on signature-based scanning (such as matching the MD5 values of files). The recompilation can be done in an automated manner, creating large numbers of unique binaries; we often see this type of behavior with certain rogue software. URL filtering can be a usable alternative in such cases, but block lists must be updated continuously, and the websites or IPs listed have to be checked and rated continuously in order to keep the block lists accurate.
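Why recompilation defeats file-hash signatures while URL blocking still works can be shown in a few lines. The byte strings and the URL below are invented for illustration; the point is only that a trivial change between two builds of the same malicious logic produces a completely different MD5, while the distribution URL stays constant:

```python
import hashlib

# Two hypothetical "builds" of the same malicious logic; automated
# recompilation introduces a trivial difference (here, padding bytes).
build_a = b"MALWARE-CORE" + b"\x00" * 4
build_b = b"MALWARE-CORE" + b"\x90" * 4

# The file-hash signatures no longer match between builds.
print(hashlib.md5(build_a).hexdigest() == hashlib.md5(build_b).hexdigest())  # False

# A URL block list catches both builds, since they ship from one location.
BLOCKED_URLS = {"malhost.example/dl/installer.exe"}
print("malhost.example/dl/installer.exe" in BLOCKED_URLS)  # True
```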
The aim of this article was to clarify the potential for ISP level malware filtering and to examine whether such solutions are implemented or planned.
Clean feed Web content filtering solutions are implemented in certain countries, like Sweden, the UK, and Australia. In these cases, the clean feed is focused on filtering Web content related to child pornography. ISPs are thus filtering the network feed, but only to a certain extent; they omit the filtering of viruses and malware. Yet the technology for such filtering is available. ACMA states in its Web content filtering trial report that many of the tested solutions could be extended to also provide anti-virus and anti-malware protection. ICAP-compatible anti-virus or anti-malware scanners, installed on transparent proxies, could be used for real-time scanning of tunneled network traffic. What is the reasoning behind only offering clean feed in its current extent? Spyware, malware, worms and viruses pose a serious threat to both system integrity and user privacy. The prevalence of such malicious programs could also threaten the stability of critical systems and networks. Some ISPs, such as Australia's Internode, would like to focus on malware filtering rather than performing questionable filtering of "inappropriate" Web content - filtering that could be argued to represent a form of Internet censorship.
At the same time, most ISPs acknowledge that it is important to protect systems against the pests present in their network feed, but hold that protective measures are the responsibility of individual users, through the use of proper anti-virus and anti-spyware software. Many ISPs worldwide sell separate anti-virus and anti-spyware software bundles to their customers as optional extras, instead of providing a malware-free network feed. Providing malware filtering as an extension of the existing clean feed could prove to be a competitive advantage for ISPs that offer such solutions to their customers.
In the publication "Making the Internet Safe", the Australian Family Association states, "In contrast the community wants primary filtering to be done at the ISP level."6 If that statement is true, it raises an important final question: where does the responsibility of the ISP start and where does it end?
1. NetClean ProActive for ICAP. http://www.netclean.com/EN/documents/NetClean_for_ICAP_EN.pdf. Retrieved 2009-01-21.
2. Closed Environment Testing of ISP Level Internet Content Filtering. http://www.acma.gov.au/webwr/_assets/main/lib310554/ISP level_internet_content_filtering_trial-report.pdf. Retrieved 2009-01-21.
3. Making the Internet Safe. http://www.family.org.au/InternetSafeWeb.pdf. Retrieved 2009-01-21.
4. NetClean Technology Partners. http://www.netclean.com/EN/partners_tech_en.asp. Retrieved 2009-01-21.
5. Australian ISPs: Govt porn filters 'could cripple internet'. http://cyberlaw.org.uk/2008/06/19/australian-isps-govt-porn-filters-coul.... Retrieved 2009-01-21.
6. Making the Internet Safe. http://www.family.org.au/InternetSafeWeb.pdf. Retrieved 2009-01-21.