Hi. I was at TEDxChristchurch today. If you couldn’t make it, The Press was live streaming the day on their website, and videos will be up on TEDxChristchurch’s website soon. Coming to TEDx each year is like watching a child grow up because the quality of the event gets better every year – like the design of the slides introducing speakers, the audience participation methods, and the name tag/programme.
Here’s why you need to watch the videos of the talks when they go online… (And also because I’ve missed bits, I’ve misinterpreted and I’ve probably misquoted a little.)
Because it’s a great excuse for an internet censorship machine.
This isn’t a debate about whether child sex abuse is right or wrong. You know it’s wrong, I know it’s wrong, we all know it’s wrong. This is a debate about censorship.
New Zealand has an internet blacklist. A list of content that, if your internet service provider has decided to be part of the filtering project, you can’t access. Images of child sexual abuse are meant to be the only stuff blocked, but the list is secret, censorship decisions happen in private and if international experience is anything to go by, other content has a habit of turning up blacklisted.
“The filtering system is also a tool to raise the public’s awareness of this type of offending and the harm caused to victims. The Group agreed that this particular aspect of the filter needs to be more clearly conveyed to the public.”
So basically, it’s to make it seem like they’re doing something, because it doesn’t actually prevent people from accessing child sex abuse images.
A list of objectionable sites is maintained by the Department. If someone using an ISP that’s participating in the filter tries to access an IP address on the filter list, they’ll be directed to the Department’s system. The full URL will then be checked against the filtering list. If the URL has been filtered, users end up at this page. The user can appeal for the site to be unfiltered, but no appeals have been successful yet (and some of the things people have typed into the appeal form are actually quite disturbing).
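The two-stage process described above can be sketched in a few lines. This is only an illustration of the general hybrid-filtering scheme, not the DIA’s actual implementation: every IP address and URL below is a made-up placeholder, and only the dce.net.nz landing page comes from the text.

```python
# Illustrative sketch of a two-stage hybrid filter: a coarse IP pre-filter
# diverts suspect traffic, then the full URL is checked against the blacklist.
# All addresses and URLs are hypothetical placeholders.

FILTERED_IPS = {"203.0.113.10"}                    # stage 1: coarse IP list
FILTERED_URLS = {"http://203.0.113.10/bad/page"}   # stage 2: exact URL list
BLOCK_PAGE = "http://dce.net.nz/"

def route(dest_ip: str, url: str) -> str:
    """Return where a request ends up under the two-stage scheme."""
    if dest_ip not in FILTERED_IPS:
        return url  # most traffic never touches the filtering system
    # Traffic to a listed IP is diverted to the Department's system,
    # which compares the full URL against the filtering list.
    if url in FILTERED_URLS:
        return BLOCK_PAGE
    return url  # same IP, different URL: passed through untouched
```

Note that only requests to listed IPs get the special treatment, which is exactly why (as Richard Clayton’s paper quoted below points out) the different handling can be detected from the outside.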
Is my internet being filtered?
The internet of 2.2 million ISP clients is being filtered.
I assume, for the ISPs providing a mobile data service, the filter is being applied there too.
Why the filter is stupid
Child pornography is not something someone stumbles upon on the internet. Ask anyone who has used the internet whether they have innocently stumbled upon it. They won’t have.
It’s easy to get around. The filter doesn’t target protocols other than HTTP. Email, P2P, newsgroups, FTP, IRC, instant messaging and basic HTTPS encryption all go straight past the filter, regardless of content. Here’s NetClean’s brochure on WhiteBox (pdf), and another (pdf). Slightly more technical, but still basic tools like TOR also punch holes in the filter. The filter is not stopping anyone who actually wants to view this kind of material.
A much more effective use of time and money is to try to get the sites removed from the internet, or you know, track down the people sharing the material. Attempts to remove child sex abuse material from web hosts will be supported by a large majority of hosts and overseas law enforcement offices.
It is clear that the DIA don’t do this regularly. They’re more concerned with creating a list of URLs.
“Additionally 18% of the users originated from search engines such as google images.”
Google would take down child sex abuse images from search results extremely fast if they were made aware of them. And it is actually extremely irresponsible for the DIA not to report those images to Google.
Update: The DIA say they used Google Images as an example, and that they do let Google know about content they are linking to.
“The CleanFeed [the DIA uses NetClean, not Cleanfeed] design is intended to be extremely precise in what it blocks, but to keep costs under control this has been achieved by treating some traffic specially. This special treatment can be detected by end users and this means that the system can be used as an oracle to efficiently locate illegal websites. This runs counter to its high level policy objectives.” Richard Clayton, Failures in a Hybrid Content Blocking System (pdf).
It might be possible to use the filter to determine a list of blocked sites, thus making the filter a directory or oracle for child sex content (however, it’s unlikely people interested in this sort of content actually need a list). Theoretically one could scan IP addresses of a web hosting service with a reputation for hosting illegal material (the IWF have said that 25% of all websites on their list are located in Russia, so a Russian web host could be a good try). Responses from that scan could give up IP addresses being intercepted by the filter. Using a reverse lookup directory, domain names could be discovered that are being directed through the filter. However, a domain doesn’t have to contain only offending content to be sent through the DIA’s system. Work may be needed to drill down to the actual offending content on the site. But this would substantially reduce the effort of locating offending content.
Child sex abuse sites could identify DIA access to sites and provide innocuous images to the DIA and child sex abuse images to everyone else. It is possible that this approach is already happening overseas. The Internet Watch Foundation who run the UK’s list say in their 2010 annual report that “88.7% of all reports allegedly concerned child sexual abuse content and 34.4% were confirmed as such by our analysts”.
Someone could just use an ISP not participating in the filter. However people searching for this content likely know they can be traced and will likely be using proxies etc. anyway. Using proxies means they could access filtered sites through an ISP participating in the filter as well.
It is hard (practically, and mentally) for three people to keep on top of child sex abuse sites that, one would assume, change location frequently, while, apparently, reviewing every site on the list monthly.
“The system also will not remove illegal content from its location on the Internet, nor prosecute the creators or intentional consumers of this material.” and that
“The risk of inadvertent exposure to child sexual abuse images is low.”
The Code of Practice says:
“6.1 During the course of the filtering process the filtering system will log data related to the website requested, the identity of the ISP that the request was directed from, and the requester’s IP address. 6.2 The system will anonymise the IP address of each person requesting a website on the filtering list and no information enabling the identification of an individual will be stored.”
“6.5 Data shall not be used in support of any investigation or enforcement activity undertaken by the Department.” and that
“5.4 The process for the submission of an appeal shall: • be expressed and presented in clear and conspicuous manner; • ensure the privacy of the requester is maintained by allowing an appeal to be lodged anonymously.”
Anonymity seems to be a pretty key message throughout the Code of Practice.
“When a request to access a website on the filtering list is blocked the system retains the IP address of the computer from which the request originated. This information is retained for up to 30 days for system maintenance releases and then deleted.” [emphasis mine]
Update: The DIA says that the IP address is changed to 0.0.0.0 by the system.
The site that people are directed to when they try to access a URL on the blacklist (http://dce.net.nz) is using Google Analytics. The DIA talk the talk about privacy and anonymity around the filter, but they don’t walk the walk: information about New Zealand internet users is being sent to Google in the United States. It’s possible this is how the DIA gets the data on device type etc. that they use in their reports. But because anyone can simply visit the site (like me, just now), those statistics wouldn’t be accurate.
“Andrew Bowater asked whether the Censorship Compliance Unit can identify whether a person who is being prosecuted has been blocked by the filtering system. Using the hash value of the filtering system’s blocking page, Inspectors of Publications now check seized computers to see if it has been blocked by the filtering system. The Department has yet to come across an offender that has been blocked by the filter.”
I’m not exactly sure what they mean by hash value, but this would seem to violate the “no information enabling the identification of an individual will be stored” principle.
Update: They are searching for the fingerprint of content displayed by the blocking page. It doesn’t seem like they could match up specific URL requests, just that the computer had visited the blocking page.
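Fingerprinting content by hash is a standard forensic technique, and a minimal sketch makes the update above concrete. The page content here is a made-up placeholder; the only assumption from the text is that inspectors compare a known fingerprint of the blocking page against files found on a seized machine.

```python
# Sketch of content fingerprinting by cryptographic hash. If a seized
# computer's browser cache contains a file whose hash matches the known
# blocking page, that machine displayed the page at some point -- but the
# hash alone says nothing about WHICH blocked URL was requested.
import hashlib

BLOCK_PAGE_HTML = b"<html>...this site has been filtered...</html>"  # placeholder
FINGERPRINT = hashlib.sha256(BLOCK_PAGE_HTML).hexdigest()

def cache_contains_block_page(cached_files: list[bytes]) -> bool:
    """True if any cached file matches the blocking page's hash."""
    return any(hashlib.sha256(f).hexdigest() == FINGERPRINT
               for f in cached_files)
```

This matches the limitation noted above: a hash hit shows only that the blocking page was stored, not which request triggered it.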
Where did these 6500 URLs disappear to (or, more accurately, why did they disappear)? What was being erroneously blocked during the trial period, or was 7000 just a nice number to throw around to exaggerate the likelihood of coming across child sex abuse images (though, even with 7000 sites, the likelihood would still have been tiny)?
Firstly, we weren’t going to have a filter at all:
“The technology for internet filtering causes delays for all internet users. And unfortunately those who are determined to get around any filter will find a way to do so. Our view is that educating kids and parents about being safe on the internet is the best way of tackling the problem.”
“Aware that the inclusion of drawings or computer generated images of child sexual abuse may be considered controversial, officials advised that there are 30 such websites on the filtering list [that number is now higher, 82 as of December 2011]. Nic McCully advised that officials had submitted computer generated images for classification and she considered that only objectionable images were being filtered.”
The arguments around re-victimization kind of fall apart when you’re talking about a drawing.
“The Group was asked to look at a child model website in Russia. The young girl featured on the site appears in a series of 43 photo galleries that can be viewed for free. Apparently the series started when the girl was approximately 9 years old, with the latest photographs showing her at about 12 years old. The members’ part of the site contains more explicit photos and the ability to make specific requests. While the front page of the website is not objectionable, the Group agreed that the whole purpose of the site is to exploit a child and the site can be added to the filter list.”
Clearly illegal, objectionable images of child sexual abuse? No, but we think it should be filtered so we went and did that.
The DIA was secretive about the filter being introduced in the first place. Their first press release about it was two years after a trial of the system started. I wonder how many of those customers using an ISP participating in the trial knew their internet was being filtered during that time?
The Independent Reference Group is more interesting than independent. Steve O’Brien is a member of the group. He’s the manager of the Censorship Compliance Unit. To illustrate this huge conflict of interest, he is the one who replies to Official Information Act requests about the filter. Because the Censorship Compliance Unit operate it.
“The Group was advised that the issue of Steve O’Brien’s membership had been raised in correspondence with the Minister and the Department. Steve O’Brien offered to step down if that was the wish of the Group and offered to leave the room to allow a discussion of the matter. The Group agreed that Steve O’Brien’s continued membership makes sense.” [emphasis mine]
That was the only explanation given. That it makes sense that he is a member. Of the group that is meant to be independent.
Additionally, the DIA seems to have accidentally deleted some reports that they should have been keeping.
“Last year we used the Official Information Act to ask for copies of the reports that the inspectors [have] used to justify banning the websites on the list. The DIA refused. After we appealed this refusal to the Ombudsman, the DIA then said that those records had been deleted and therefore it was impossible for them to give them to us anyway. The Department has an obligation under the Public Records Act to keep such information.
We complained to the Chief Archivist, who investigated and confirmed that the DIA had deleted public records without permission. He told us that the DIA has promised to do better in the future, but naturally this didn’t help us access the missing records.”
The Code of Practice says:
“4.3 The list will be reviewed monthly, to ensure that it is up to date and that the possibility of false positives is removed. Inspectors of Publications will examine each site to ensure that it continues to meet the criteria for inclusion on the filtering list.”
It’s unlikely this actually happens.
Here are some statistics on how many URLs have been removed from the list:
December 2011: 267 removed
August 2011: 0 removed
April 2011: 108 removed
It’s impossible that between April and August there were no URLs to remove.
“The list has been completely reviewed and sites that are no longer accessible or applicable (due to the removal of Child Exploitation Material) have been removed.”
The Independent Reference Group has the power to review sites themselves. But in at least one case, they chose not to:
“Members of the Group were invited to identify any website that they wish to review. They declined to do so at this stage.”
The filter isn’t covered by existing law and didn’t pass through Parliament. Appropriate checks and balances have not taken place. The DIA did this on their own.
By law, the Classification Office has to publish its decisions, which they do. The DIA’s filter isn’t covered under any law, and they refuse to release their list. The DIA say that people could use the list to commit crimes, but the people looking for this material will have already found it.
What if the purpose of the filter changes? The DIA introduced it without a law change, the DIA can change it without a law change. What if they say “if ISPs don’t like it, they can opt out of the filter”? How many ISPs will quit?
The only positive is that the filter is opt in for ISPs. Please support the ISPs that aren’t using the filter. Support them when they’re accused of condoning child pornography, and support them when someone in government decides that the filter should be compulsory for all ISPs.
Side note: why does all of the software on the DIA’s family protection list, bar one, cost money? There is some excellent, or arguably better, free software available. There’s even a free version of SiteAdvisor, but the DIA link to the paid one. Keep in mind that spying on your kids is creepy. Talk to them, don’t spy. The video for Norton Online Family hilariously and ironically goes from saying “This collaborative approach makes more sense than simply spying on your child’s internet habits [sitting down and talking — which is absolutely correct]” to talking about tracking web sites visited, search history, social networking profiles, chat conversations and then how they can email you all about them. Seriously. Stay away.
Look what I found at the end of the Hoyts ticket counter:
It contains some interesting content.
“Remove unauthorised material from your computers”
“While not required under the new law, illegally obtained copyright protected material may still be file shared and therefore should be removed.”
Read: buy the files you downloaded illegally in the past. More helpful advice would be to remove peer-to-peer software from your computer if you’re not using it, or to stop sharing illegally obtained material if you are (e.g. stop seeding).
“What are the risks of P2P file sharing?”
“P2P file sharing can expose your computer to harmful viruses, worms and trojan horses as well as annoying pop-up advertisements. There is also a real danger that private information on your computer may be accessible to others on P2P networks.”
Finding files through moderated sites (which can remove harmful torrents), reading the comments on torrents and having up-to-date anti-malware software all reduce this small risk of harm.
The “real danger” of private information being inadvertently shared is practically impossible with torrenting. LimeWire, FrostWire and friends were possibly deceptive in the past about which of a user’s folders were actually being shared, but LimeWire is now dead and FrostWire exclusively uses torrents, so it shouldn’t be a problem anymore.
But points for including the relatively unbiased URL of NetSafe’s The Copyright Law site, albeit in tiny print at the very bottom of the back page.
This site is interesting, especially when you compare its list of legitimate places to buy movies and TV shows to the US version’s list.
Our list for TV shows is basically the On Demand sites for the free-to-air TV stations, plus iSky. On the movies side we have iSky, the console networks and iTunes, which is also listed as having TV shows, but that’s not the case in New Zealand.
In comparison, the US site lists 43 legal alternatives, including iTunes (which you can actually get TV shows from in the US, or by using a US iTunes account), Hulu and Netflix.
And the MPAA wonder why people illegally download movies and TV shows in New Zealand?
Good news on the music front though. Music streaming subscription service Spotify is coming to Australia and New Zealand, possibly around February next year. The downside is that they’re now in bed with Facebook, so you’ll need a Facebook account to use it.
Jonathan Hunt and Lance Wiggs illustrate how inadequate the sites MPAA lists are. MPAA, NZFACT and friends love harping on how people pirating movies like Boy are harming our movie industry in New Zealand.
But you still can’t download it legally from iTunes.
In 2009 the New Zealand Blood Service (NZBS) changed their deferral criteria for donating blood based on a 2008 review. The men who have sex with men (MSM) ban was reduced from 10 years to five years—“You must not give blood for: five years following oral or anal sex with or without a condom with another man (if you are male)”. There will be another review of the criteria in 2013.
A one year deferral is in place for a woman who has had sex with an MSM, and for anyone who has had sex with a person who carries the hepatitis B or C viruses, an injecting drug user, a sex worker, a person with haemophilia or a related condition, or a person who has lived in or comes from a country with high HIV prevalence. People who have worked as sex workers only in New Zealand can’t give blood for a year.
People who have worked as sex workers outside of New Zealand or who have lived in a country with a high rate of HIV (including sub Saharan Africa and parts of Asia) can’t give blood for five years.
People who have injected/snorted non-prescription illegal drugs, or who have lived in the UK, France or the Republic of Ireland for a total of six months or more between 1980 and 1996 (because of possible exposure to Creutzfeldt-Jakob disease), are permanently deferred from giving blood.
New Zealand sex workers aren’t considered to be a high HIV risk because: “there have been only 20 women diagnosed with HIV who were known to be sex workers and three to four men who were reported to be infected by a sex worker in New Zealand.”
MSM bans around the world
New Zealand isn’t as strict as other countries. Hong Kong, Singapore, Austria, Belgium, Denmark, Finland, France, Germany, Ireland, Netherlands, Norway, Portugal, Sweden and the UK have a lifetime ban on MSM donating blood. The US, Canada and Switzerland effectively do too, banning any men who have had sex with men after 1977.
Australia and Japan have a one year ban, South Africa has a six month ban, and Spain and Italy base their exclusions on behaviour rather than the sex of sexual partners. Spain has a 12 month exclusion for anyone who has had more than one sexual partner in the last 12 months. The interpretation of Italy’s behaviour-based exclusion is unclear and inconsistently applied; some centres still exclude MSM.
“Once a potential donor presents there is a three tier combination approach to safety: a questionnaire on behaviour followed by an interview, tests that are highly sensitive and specific are carried out on the donated blood, and (for manufactured plasma products) the use of physical and/or chemical methods to inactivate infectious agents.”
The HIV concerns that remain, even though donated blood is tested, relate to the early period following infection when the infection doesn’t show up on tests, to the risk that established infections aren’t picked up by testing, and to the risk that infected blood is identified but fails to be removed from the system. The early “window period” for HIV averages about 12 days using Nucleic Acid Testing, which the NZBS uses. A short deferral period of a year would eliminate the risk of window period infections. Longer deferral periods reduce the risk that established infections present.
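The window-period risk can be put in rough numbers with the standard incidence/window-period model (residual risk ≈ incidence rate among donors × window length). The incidence figure below is a made-up illustration, not NZBS data; only the 12-day NAT window comes from the text above.

```python
# Back-of-the-envelope window-period risk: residual risk per donation is
# approximately the donor incidence rate multiplied by the fraction of the
# year spent in the undetectable window.
# The incidence figure is hypothetical, for illustration only.

WINDOW_DAYS = 12             # average HIV window with Nucleic Acid Testing
incidence_per_100k_py = 2.0  # hypothetical: new infections per 100,000 person-years

risk_per_donation = (incidence_per_100k_py / 100_000) * (WINDOW_DAYS / 365)
print(f"~{risk_per_donation * 1e6:.3f} window-period donations per million")
```

Even with this deliberately rough model, the per-donation figure comes out well under one per million, which is why the remaining argument for long deferrals rests on established-but-undetected infections rather than the window period itself.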
It’s thought that people with a higher risk of having HIV would also have a higher risk of having an “unknown or untested for infectious [agent]”.
The risk of the test system failing to detect an infection where “the marker is present” is very low because of the features of modern testing equipment used and because NZBS tests for each major virus twice. However “the test system may be unable to detect a rare form of the virus”.
“No transmissions have been documented in New Zealand since routine testing was introduced for these viruses… however… the low levels of risk are achieved by a combination of measures and are not solely due to the effect of blood donation testing.”
Australia’s one year deferral
About a decade ago, Australia dropped to a 12 month deferral for donors who have had male-to-male sex.
“Surprisingly in Australia, with a one-year deferral for MSM, though MSM are still over represented, the prevalence of HIV is only 4 per million donations, less than in New Zealand (11 per million donations). This suggests that there is either greater adherence to deferral criteria in Australia, or a higher rate of clinical HIV testing and therefore fewer undiagnosed infections, or the figures from Australia are incomplete.”
A study in Australia found no evidence of a significantly increased risk of transfusion-transmitted HIV after the one year deferral period for MSM was implemented. In the post-change data, the five HIV-positive donors who had had male-to-male sex would have been excluded had they been honest and provided a complete history.
“We found no evidence that the implementation of the 12-month deferral for male-to-male sex resulted in an increased recipient risk for HIV in Australia. The risk of noncompliance to the revised deferral rather than its duration appears to be the most important modifier of overall risk.”
Donating blood is a valued social activity, and a restriction based on the sex of sexual partners is indirectly homophobic: it creates social exclusion and adds to stigma on the basis of male-to-male sex. In the US there is a group who have an “HIV prevalence 17 times that of their comparator: black versus white women”. There’s no call for a ban on that group donating blood. Are we more sensitised to racism than homophobia?
“It does not distinguish between sexual acts… or whether a man has been in a monogamous relationship, but stigmatises any male same sex contact.”
But would a one year ban, like Australia’s, be any less discriminatory? There is an ethical requirement to protect the recipients of blood because they’ve been thrown into their situation. For indirect discrimination to be truly removed, there would have to be no ban on MSM. That’s unlikely until medical advances make it safe for the recipients of donated blood.