An interesting site popped up near the end of last year called YouHaveDownloaded.com. You might not have visited it, or even heard of it, but if you’ve been using torrents, it might have heard of you.
The site is quite simple: it tracks torrents and the people (IP addresses) downloading them, much like copyright holders do (or hire companies to do for them). It claims to be tracking roughly 4–6% of all torrent downloads and 20% of torrents from public trackers, like The Pirate Bay.
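To see why a site like this is even possible: BitTorrent trackers hand every peer a list of other peers' IP addresses and ports, and with the common "compact" response format (BEP 23) each peer is just six bytes. A monitoring outfit only has to ask the tracker and decode the answer. A minimal sketch of that decoding step (the example addresses are made up):

```python
import struct

def parse_compact_peers(blob):
    """Parse a BEP 23 'compact' peer list: each peer is 6 bytes,
    4 for the IPv4 address and 2 for the port, both big-endian."""
    peers = []
    for i in range(0, len(blob) - len(blob) % 6, 6):
        ip = ".".join(str(b) for b in blob[i:i + 4])
        (port,) = struct.unpack(">H", blob[i + 4:i + 6])
        peers.append((ip, port))
    return peers

# Two example peers: 1.2.3.4:6881 and 10.0.0.1:51413
blob = bytes([1, 2, 3, 4, 0x1A, 0xE1, 10, 0, 0, 1, 0xC8, 0xD5])
print(parse_compact_peers(blob))  # [('1.2.3.4', 6881), ('10.0.0.1', 51413)]
```

Every client in a swarm sees this list, so "monitoring" torrents is mostly just collecting it and writing it down.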
The difference from the copyright holders is that this site makes the information it collects public. You can see what it thinks the IP address you’re using has been used to torrent, or look up any other IP address you can think of. It might not be right, or it might be spot on.
This site just highlights what is going on all the time. Torrenting is a very public activity unless you’re making an effort to protect your privacy (like using a proxy or VPN from a reputable provider). Privacy is not the default on the interwebs.
IP addresses are more like PO Boxes than physical addresses — most people have dynamic IP addresses that change regularly, and once you add in the fact that some people have insecure Wi-Fi, the results on the site aren’t that accurate.
The site brings up an interesting statistic, especially if it’s true: “About 10% of all online shoppers, in the US, are torrent users as well.” In the future, will advertisers link an IP’s torrenting history to an advertising profile? Is this already happening?
The removal form
The site provides a form that supposedly enables people to request removal from the site. Don’t use it.
Previously it asked people to sign in using their Facebook accounts, and the CAPTCHA to get to the non-Facebook removal form didn’t work (i.e. they wanted to link your data with a real name; cue warning bells). Now it seems Facebook has revoked their access to Facebook logins (they say logins are “Temporarily disabled due problems with Facebook”), so the site goes straight to the removal form, which asks for a name and an email address.
I’m not saying this is what the people behind the site are doing, but this would be all the information they would need, in addition to the torrents associated with your IP address, to send an extortionate email your way. Or to sell your data (probably not to copyright holders, who already hire people to do this).
Here’s what their removal terms are (and yeah, the rest of the site is worded like this too):
“Removal Terms The Details: By submitting a request to have your download activity removed from our database, you are acknowledging that the activity was, in fact, carried out by yourself. This means that you are only submitting a request to have the details of your own personal activity deleted. Any unrecognized activity, such as files you did not download or do not remember downloading, are not — I repeat, are not to be included in your removal request. Why is this imperative? Well, we actually don’t have to explain ourselves…sorry.
The important part is that you understand these terms and conditions before hitting that beautiful button that will erase your criminal back ground, at least for now. Wait, you did remember to read these terms before making the decision to submit a removal request, right? Of course you did, everyone reads the fine print.
Other Important Things to Consider: We make no guarantees that your information will not appear on any other databases. We may have erased your bad behavior but, keep in mind that your data on this site is aggregated public domain. So, if by chance, another sadistic group of people decides to open a similar web site, we have no control over what they do with your information. Furthermore, if you continue to involve yourself in activity like this, your future download history will, without a doubt, appear in our database again and we may not be as nice about it next time.
If any part of these terms is still unclear, please visit your local elementary school and ask to repeat grades 3 through 5.”
Giving the people or company behind the site any more information about yourself is not a good idea, even if they claim that the site is a joke and you shouldn’t take it seriously.
And anyway, if your IP address is listed on the site, it must be because of the person that used it previously. Right?
Perhaps they should have also asked how many people would just change how they download files illegally.
The WAND Network Research Group at The University of Waikato has been measuring how traffic flows through a New Zealand ISP. They can split traffic into types with a pretty high degree of accuracy without having to “look inside” too much. Donald Clark compares it to looking at the postmark of a package and giving it a squeeze and being able to tell, in general terms, what’s inside, without having to open it.
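The "give it a squeeze" approach can be approximated with payload signatures: look at the first few bytes a connection sends, without inspecting the rest. The signatures below are simplified assumptions for illustration (WAND's real classifier also considers payload in both directions, payload sizes and ports):

```python
# Simplified, assumed first-bytes signatures; a real classifier
# uses far more evidence than a single prefix match.
SIGNATURES = [
    (b"\x13BitTorrent protocol", "BitTorrent"),  # BT handshake header
    (b"SSH-", "SSH/Remote"),                     # SSH version banner
    (b"GET ", "HTTP"),
    (b"POST", "HTTP"),
    (b"\x16\x03", "TLS/Encrypted"),              # TLS handshake record
]

def classify(payload):
    """Guess the application protocol from the first payload bytes."""
    for prefix, label in SIGNATURES:
        if payload.startswith(prefix):
            return label
    return "Unknown"

print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # BitTorrent
print(classify(b"GET / HTTP/1.1\r\n"))                     # HTTP
```

This is also why the "Unknown" and "Encrypted" buckets in the data matter: traffic that deliberately avoids recognisable prefixes lands there.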
The resulting data is a valuable insight into how residential DSL customers at this particular ISP reacted to the new law.
More graphical goodness can be found in the slides from a NZNOG presentation here.
There was about a 75% decrease in BitTorrent traffic straight after the law was introduced, largely sustained into 2012, with huge increases in remote-access and tunneling traffic. The law isn’t stopping file sharing, just moving it underground, via VPNs, seedboxes and sites like the now-closed Megaupload.
There was also a big decrease in newsgroup traffic, even though it doesn’t appear to be targeted by the new law.
“P2P, P2P structure, Unknown, Newsgroups and Encrypted [not all shown in the graph above] have all decreased massively from their January 2011 levels. Interestingly, each of these categories can be tied to the illegal downloading activities targeted by the CAA [Copyright Amendment Act]. P2P and P2P structure are obviously related, Newsgroups are a common source of torrent files and the Unknown and Encrypted categories were strongly suspected of containing a significant quantity of encrypted P2P traffic.
Even more interestingly, Remote, Tunneling and Files experienced similarly large growths in the amount of traffic downloaded by DSL users. This is probably indicative of people changing their approach to downloading copyrighted material. Instead of participating in file sharing on their home machines, it has become more common for people to use machines based in other countries and ship the file back home via another protocol. This might be via SSH, VPN or FTP, for example, which are all covered by the growing categories.
Similar trends are observed when looking at traffic transmitted by the DSL users. Categories associated with P2P file sharing have seen much less traffic compared with January 2011, whereas Tunneling, Remote and Files have soared.
It should be noted that although Tunneling has grown significantly, the overall amount of Tunneling traffic is still much less than the total amount of P2P traffic. But the sudden changes in application protocol usage are still very noteworthy and suggest that the CAA has had a major impact on people’s Internet usage.”
We’re six months into the Copyright (Infringing File Sharing) Amendment Act, the law that pleased no one (the copyright lobby thought copyright holders should only pay the price of sending a letter, everyone that uses the internet thought the law was stupid), but was passed anyway.
Tech Liberty asks if some infringement notices being sent to customers are invalid because they don’t contain the required information under the law.
An Orcon customer posted on the 3StrikesNZ forum about two notices (s)he received and posted screenshots of the emails (click for larger versions). Note that both notices are for the same song. Anonnz says that the offending file, torrent, and software were removed after the first notice, so a second warning notice should never have been sent.
4(2)c(iii) states notices must describe the type of work in terms of section 14(1) of the Copyright Act.
4(2)c(iv) states notices must describe the restricted act or acts in terms of section 16(1) of the Copyright Act by which copyright in the work is alleged to have been infringed.
4(2)c(v) states notices must give the New Zealand date and time when the alleged infringement occurred or commenced, which must specify the hour, minute, and second. The first notice doesn’t specify the time to the second.
4(2)c(vi) states notices must identify the file sharing application or network used in the alleged infringement.
5(2)b states notice numbers must identify whether the notice is a detection notice, a warning notice, or an enforcement notice; and (c) that they must identify the IPAP that sent the notice.
Additionally, the second to last paragraph of the notice misinforms customers about internet account suspension, stating: “the Copyright Tribunal has the authority to … apply to the District Court to suspend your account for any period up to six months”. Account suspension is not currently an available punishment.
The requirements for notices and punishments are spelled out quite clearly, so I wonder what else copyright holders and IPAPs are doing incorrectly.
Delivery of infringement notices
There’s some really interesting discussions over on the 3StrikesNZ forum.
The nature of delivery of infringement notices has been brought up. FlyingPete suggests that email is unreliable for the delivery of such important notices (as in missing them could cost the account holder $15k), because of spam filters and because some people don’t check email accounts very often.
StuFlemingWIC, from an IPAP, points out that even snail mail is unreliable, especially when sent to student flats. He suggests that registered mail would have been a good requirement for sending notices.
Not with Photoshop (and apparently Paint Shop Pro), or your printer, anyway.
The counterfeit deterrence system
If you try to open an image of specific currencies (and I assume at a specific resolution or higher) in Photoshop, you’ll receive the same error message as above. It’s interesting to note that New Zealand’s money isn’t blocked from being opened. Probably because we’re too busy trying to stop our passports from being counterfeited.
So what if your counterfeiting plans were going well so far, and now you’re at a standstill because of Adobe? You can use Gimp. It opens banknotes without trouble. So do old versions of Photoshop. And Microsoft Paint.
Why did Adobe think it was a good idea to add this? Counterfeiters will already know they can use an older version of Photoshop, or other software, to get around this additional ‘feature’, and will be doing exactly that.
All Adobe is doing is pissing off people who are trying to use Photoshop for a legitimate reason.
The Rules For Use website the dialog box directs users to even lists situations where you can reproduce banknotes legally (e.g. at a certain size), but Photoshop blocks opening banknotes full stop.
Why is it included?
Adobe will have had to spend time and money on including this system, with no returns in the form of additional sales. I assume they were pressured to include it, or even paid to include it by the Central Bank Counterfeit Deterrence Group.
“The inner workings of the counterfeit deterrence system are so secret that not even Adobe is privy to them. The Central Bank Counterfeit Deterrence Group provides the software as a black box without revealing its precise inner workings, Connor said.”
If you’ve bought Photoshop, were you aware of this system at the time of sale? You bought the software to open and edit images, but there are limitations you wouldn’t have been told about.
Here are the two places where this system is mentioned on Adobe’s website: a forum post and the information post linked to above.
Where is the information page linked from on Adobe’s website? My guess is not very many places, because those pages should have come up in the search too.
Printers are in on this too
I tried to print United States banknotes from Banknotes.com too. And the job failed. Here’s a New Zealand banknote that printed (and scanned) fine, with one of the United States notes below, which stopped printing halfway through.
Here’s the error message in the print dialog.
Error 9707 seems to be specific to the counterfeit deterrence system, but is only described as “reading pixels failed”.
BNZ’s newest card terms and conditions specify an interesting use of your Eftpos card PIN that’s not permitted – using it as the lock code on your phone.
1.5 PIN selection … Your PIN should not be used for any other purpose including your lock/unlock code for your mobile phone.
In the new card letter they also make an interesting comparison of PINs to electronic signatures. But I think their next sentence shows why this is a potentially confusing example to give:
“When selecting a PIN please remember that this is your electronic signature. You must not keep a written record of your PIN, give your PIN to any other person or select a PIN that can be readily associated with you such as birth dates, addresses, parts of telephone numbers, car registrations, sequential numbers (eg 1234, 9999) or any other easily found personal information.”
Signatures are often written down, given away and are made up of personal information. Perhaps there is a better comparison available?
Foodstuffs/New World are using RFID technology on trolleys to track customer movement around the store.
Yes they are RFID receivers designed to pick up the signals from the front of most of our trolleys (although they are not currently active due to an issue with some of the receivers). The project is being done by Foodstuffs so that they can better understand customer movements around the store. This will enable them to design better supermarkets in the future.
> Hi
>
> I noticed Symbol(?) units installed on the ceiling in the store. I’m just curious as to what they are for. Are they using RFID technology?
>
> Kind regards
>
> Matt Taylor
Because it’s a great excuse for an internet censorship machine.
This isn’t a debate about whether child sex abuse is right or wrong. You know it’s wrong, I know it’s wrong, we all know it’s wrong. This is a debate about censorship.
New Zealand has an internet blacklist. A list of content that, if your internet service provider has decided to be part of the filtering project, you can’t access. Images of child sexual abuse are meant to be the only stuff blocked, but the list is secret, censorship decisions happen in private and if international experience is anything to go by, other content has a habit of turning up blacklisted.
“The filtering system is also a tool to raise the public’s awareness of this type of offending and the harm caused to victims. The Group agreed that this particular aspect of the filter needs to be more clearly conveyed to the public.”
So basically, it’s to make it seem like they’re doing something, because it doesn’t actually prevent people from accessing child sex abuse images.
A list of objectionable sites is maintained by the Department. If someone using an ISP that’s participating in the filter tries to access an IP address on the filter list, they’ll be directed to the Department’s system. The full URL will then be checked against the filtering list. If the URL has been filtered, users end up at this page. The user can appeal for the site to be unfiltered, but no appeals have been successful yet (and some of the things people have typed into the appeal form are actually quite disturbing).
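That two-stage design (divert traffic by IP address at the routing level, then check the full URL at the Department's proxy) can be sketched as follows. The addresses, URLs and return values here are made-up placeholders, not the DIA's actual implementation:

```python
# Stage 1: a coarse IP list decides which traffic gets diverted.
FILTERED_IPS = {"203.0.113.7"}
# Stage 2: the proxy checks the exact URL against the filter list.
FILTERED_URLS = {"http://203.0.113.7/bad/page.html"}

BLOCK_PAGE = "http://dce.net.nz"  # where blocked users end up

def handle_request(dest_ip, url):
    if dest_ip not in FILTERED_IPS:
        return "direct"            # traffic never touches the filter
    # Diverted to the filtering proxy, which inspects the full URL.
    if url in FILTERED_URLS:
        return BLOCK_PAGE          # user lands on the appeal page
    return "fetched-via-proxy"     # other URLs on the same IP pass through

print(handle_request("198.51.100.1", "http://example.com/"))   # direct
print(handle_request("203.0.113.7", "http://203.0.113.7/ok"))  # fetched-via-proxy
```

The design keeps costs down because only a sliver of traffic is diverted, but it also means allowed traffic to a filtered IP behaves subtly differently from direct traffic, which is the "special treatment" that makes the system detectable.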
Is my internet being filtered?
The internet of 2.2 million ISP clients is being filtered.
I assume, for the ISPs providing a mobile data service, the filter is being applied there too.
Why the filter is stupid
Child pornography is not something someone stumbles upon on the internet. Ask anyone who has used the internet whether they have innocently stumbled upon it. They won’t have.
It’s easy to get around. The filter doesn’t target protocols other than HTTP. Email, P2P, newsgroups, FTP, IRC, instant messaging and basic HTTPS encryption all go straight past the filter, regardless of content. Here’s NetClean’s brochure on WhiteBox (pdf), and another (pdf). Slightly more technical, but still basic, tools like Tor also punch holes in the filter. The filter is not stopping anyone who actually wants to view this kind of material.
A much more effective use of time and money is to try to get the sites removed from the internet, or you know, track down the people sharing the material. Attempts to remove child sex abuse material from web hosts will be supported by a large majority of hosts and overseas law enforcement offices.
It is clear that the DIA don’t do this regularly. They’re more concerned with creating a list of URLs.
“Additionally 18% of the users originated from search engines such as google images.”
Google would take down child sex abuse images from search results extremely fast if they were made aware of them. And it is actually extremely irresponsible for the DIA not to report those images to Google.
Update: The DIA say they used Google Images as an example, and that they do let Google know about content they are linking to.
“The CleanFeed [the DIA uses NetClean, not Cleanfeed] design is intended to be extremely precise in what it blocks, but to keep costs under control this has been achieved by treating some traffic specially. This special treatment can be detected by end users and this means that the system can be used as an oracle to efficiently locate illegal websites. This runs counter to its high level policy objectives.” Richard Clayton, Failures in a Hybrid Content Blocking System (pdf).
It might be possible to use the filter to determine a list of blocked sites, thus making the filter a directory or oracle for child sex content (however, it’s unlikely people interested in this sort of content actually need a list). Theoretically one could scan IP addresses of a web hosting service with a reputation for hosting illegal material (the IWF have said that 25% of all websites on their list are located in Russia, so a Russian web host could be a good try). Responses from that scan could give up IP addresses being intercepted by the filter. Using a reverse lookup directory, domain names could be discovered that are being directed through the filter. However, a domain doesn’t have to contain only offending content to be sent through the DIA’s system. Work may be needed to drill down to the actual offending content on the site. But this would substantially reduce the effort of locating offending content.
Child sex abuse sites could identify DIA access to sites and provide innocuous images to the DIA and child sex abuse images to everyone else. It is possible that this approach is already happening overseas. The Internet Watch Foundation, who run the UK’s list, say in their 2010 annual report that “88.7% of all reports allegedly concerned child sexual abuse content and 34.4% were confirmed as such by our analysts”.
Someone could just use an ISP not participating in the filter. However people searching for this content likely know they can be traced and will likely be using proxies etc. anyway. Using proxies means they could access filtered sites through an ISP participating in the filter as well.
It is hard (practically, and mentally) for three people to keep on top of child sex abuse sites that, one would assume, change locations at a frequent pace, while, apparently, reviewing every site on the list monthly.
“The system also will not remove illegal content from its location on the Internet, nor prosecute the creators or intentional consumers of this material.” and that
“The risk of inadvertent exposure to child sexual abuse images is low.”
The Code of Practice says:
“6.1 During the course of the filtering process the filtering system will log data related to the website requested, the identity of the ISP that the request was directed from, and the requester’s IP address. 6.2 The system will anonymise the IP address of each person requesting a website on the filtering list and no information enabling the identification of an individual will be stored.”
“6.5 Data shall not be used in support of any investigation or enforcement activity undertaken by the Department.” and that
“5.4 The process for the submission of an appeal shall: • be expressed and presented in clear and conspicuous manner; • ensure the privacy of the requester is maintained by allowing an appeal to be lodged anonymously.”
Anonymity seems to be a pretty key message throughout the Code of Practice.
“When a request to access a website on the filtering list is blocked the system retains the IP address of the computer from which the request originated. This information is retained for up to 30 days for system maintenance releases and then deleted.” [emphasis mine]
Update: The DIA says that the IP address is changed to 0.0.0.0 by the system.
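If that's accurate, the anonymisation step amounts to overwriting the requester's address before the entry is kept. A trivial sketch, assuming a simple per-entry rewrite (the field names are made up):

```python
def anonymise(log_entry):
    """Return a copy of the log entry with the requester's IP
    overwritten, as the DIA says their system does (0.0.0.0)."""
    entry = dict(log_entry)
    entry["requester_ip"] = "0.0.0.0"
    return entry

raw = {"url": "http://example.com/blocked",
       "isp": "ExampleNet",
       "requester_ip": "198.51.100.23"}
print(anonymise(raw))  # requester_ip is now 0.0.0.0
```

Whether the raw address is overwritten immediately or only after the 30-day retention window is exactly the kind of detail the Code of Practice leaves ambiguous.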
The site that people are directed to when they try to access a URL on the blacklist (http://dce.net.nz) is using Google Analytics. The DIA talk the talk about the privacy and anonymity around the filter, but they don’t walk the walk, sending information about New Zealand internet users to Google in the United States. It’s possible this is how the DIA gets the data on device type etc. that they use in their reports. Because anyone can simply visit the site (like me, just now), those statistics wouldn’t be accurate.
“Andrew Bowater asked whether the Censorship Compliance Unit can identify whether a person who is being prosecuted has been blocked by the filtering system. Using the hash value of the filtering system’s blocking page, Inspectors of Publications now check seized computers to see if it has been blocked by the filtering system. The Department has yet to come across an offender that has been blocked by the filter.”
I’m not exactly sure what they mean by hash value, but this would seem to violate the “no information enabling the identification of an individual will be stored” principle.
Update: They are searching for the fingerprint of content displayed by the blocking page. It doesn’t seem like they could match up specific URL requests, just that the computer had visited the blocking page.
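My reading of that (an assumption; the DIA doesn't describe the mechanism) is that they digest the blocking page's content once, then compare that digest against digests of files found in a seized machine's browser cache, roughly:

```python
import hashlib

# Placeholder content; the real blocking page's HTML would be used.
BLOCK_PAGE_HTML = b"<html>...this site has been filtered...</html>"
BLOCK_PAGE_DIGEST = hashlib.sha256(BLOCK_PAGE_HTML).hexdigest()

def cache_contains_block_page(cached_files):
    """Check whether any cached file matches the blocking page's digest."""
    return any(hashlib.sha256(f).hexdigest() == BLOCK_PAGE_DIGEST
               for f in cached_files)

print(cache_contains_block_page([b"<html>other page</html>",
                                 BLOCK_PAGE_HTML]))  # True
```

A match like this only shows the machine displayed the blocking page at some point, not which URL triggered it, which is consistent with the DIA's description.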
Where did these 6500 URLs disappear to (or, more accurately, why did they disappear)? What was being erroneously blocked during the trial period, or was 7000 just a nice number to throw around to exaggerate the likelihood of coming across child sex abuse images (though, even with 7000 sites, the likelihood would still have been tiny)?
Firstly, we weren’t going to have a filter at all:
“The technology for internet filtering causes delays for all internet users. And unfortunately those who are determined to get around any filter will find a way to do so. Our view is that educating kids and parents about being safe on the internet is the best way of tackling the problem.”
“Aware that the inclusion of drawings or computer generated images of child sexual abuse may be considered controversial, officials advised that there are 30 such websites on the filtering list [that number is now higher, 82 as of December 2011]. Nic McCully advised that officials had submitted computer generated images for classification and she considered that only objectionable images were being filtered.”
The arguments around re-victimization kind of fall apart when you’re talking about a drawing.
“The Group was asked to look at a child model website in Russia. The young girl featured on the site appears in a series of 43 photo galleries that can be viewed for free. Apparently the series started when the girl was approximately 9 years old, with the latest photographs showing her at about 12 years old. The members’ part of the site contains more explicit photos and the ability to make specific requests. While the front page of the website is not objectionable, the Group agreed that the whole purpose of the site is to exploit a child and the site can be added to the filter list.”
Clearly illegal, objectionable images of child sexual abuse? No, but we think it should be filtered so we went and did that.
The DIA was secretive about the filter being introduced in the first place. Their first press release about it was two years after a trial of the system started. I wonder how many of those customers using an ISP participating in the trial knew their internet was being filtered during that time?
The Independent Reference Group is more interesting than independent. Steve O’Brien is a member of the group. He’s the manager of the Censorship Compliance Unit. To illustrate this huge conflict of interest, he is the one who replies to Official Information Act requests about the filter. Because the Censorship Compliance Unit operate it.
“The Group was advised that the issue of Steve O’Brien’s membership had been raised in correspondence with the Minister and the Department. Steve O’Brien offered to step down if that was the wish of the Group and offered to leave the room to allow a discussion of the matter. The Group agreed that Steve O’Brien’s continued membership makes sense.” [emphasis mine]
That was the only explanation given. That it makes sense that he is a member. Of the group that is meant to be independent.
Additionally, the DIA seems to have accidentally deleted some reports that they should have been keeping.
“Last year we used the Official Information Act to ask for copies of the reports that the inspectors [have] used to justify banning the websites on the list. The DIA refused. After we appealed this refusal to the Ombudsman, the DIA then said that those records had been deleted and therefore it was impossible for them to give them to us anyway. The Department has an obligation under the Public Records Act to keep such information.
We complained to the Chief Archivist, who investigated and confirmed that the DIA had deleted public records without permission. He told us that the DIA has promised to do better in the future, but naturally this didn’t help us access the missing records.”
The Code of Practice says:
“4.3 The list will be reviewed monthly, to ensure that it is up to date and that the possibility of false positives is removed. Inspectors of Publications will examine each site to ensure that it continues to meet the criteria for inclusion on the filtering list.”
It’s unlikely this actually happens.
Here’s some statistics of how many URLs have been removed.
December 2011: 267 removed
August 2011: 0 removed
April 2011: 108 removed
It’s impossible that between April and August there were no URLs to remove.
“The list has been completely reviewed and sites that are no longer accessible or applicable (due to the removal of Child Exploitation Material) have been removed.”
The Independent Reference Group has the power to review sites themselves. But in at least one case, they chose not to:
“Members of the Group were invited to identify any website that they wish to review. They declined to do so at this stage.”
The filter isn’t covered by existing law and didn’t pass through Parliament. Appropriate checks and balances have not taken place. The DIA did this on their own.
By law, the Classification Office has to publish its decisions, which they do. The DIA’s filter isn’t covered under any law, and they refuse to release their list. The DIA say that people could use the list to commit crimes, but the people looking for this material will have already found it.
What if the purpose of the filter changes? The DIA introduced it without a law change, the DIA can change it without a law change. What if they say “if ISPs don’t like it, they can opt out of the filter”? How many ISPs will quit?
The only positive is that the filter is opt in for ISPs. Please support the ISPs that aren’t using the filter. Support them when they’re accused of condoning child pornography, and support them when someone in government decides that the filter should be compulsory for all ISPs.
Side note: why does all of the software on the DIA’s family protection list, bar one, cost money? There is some excellent, or arguably better, free software available. There’s even a free version of SiteAdvisor, but the DIA link to the paid one. Keep in mind that spying on your kids is creepy. Talk to them, don’t spy. The video for Norton Online Family hilariously and ironically goes from saying “This collaborative approach makes more sense than simply spying on your child’s internet habits [sitting down and talking — which is absolutely correct]” to talking about tracking web sites visited, search history, social networking profiles, chat conversations and then how they can email you all about them. Seriously. Stay away.
Yahoo recommends a number of things to improve web page performance. There are a couple of WordPress plugins that help with image performance.
Yahoo has a service called Smush.it that optimizes images. The WordPress plugin, WP Smush.it, reduces the file size of most images added to posts and pages automatically behind the scenes. It’s lossless, so the look and quality of images aren’t altered. Through the Media Library, existing images can be run through Smush.it.
Something else Yahoo recommends is not scaling images using HTML code. The plugin Image Pro lets you resize images in the editor, by dragging the corners, like you’d do in Word. The resized version is saved as a separate image so in the post it isn’t scaled using HTML. However, sometimes there’s a slight quality loss using the plugin.
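The useful part of that approach is that the browser never has to downscale: the server stores a file at the displayed size. The dimension math behind such a resize is just aspect-ratio-preserving scaling (an illustrative sketch, not Image Pro's actual code; an imaging library would then save the result as a separate file):

```python
def fit_within(width, height, max_w, max_h):
    """Scale (width, height) to fit inside (max_w, max_h),
    preserving aspect ratio and never upscaling."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

print(fit_within(1600, 1200, 800, 800))  # (800, 600)
```

Serving the 800×600 file directly, instead of a 1600×1200 file squeezed by `width`/`height` attributes, is exactly what Yahoo's recommendation is about: the browser downloads a quarter of the pixels.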
On January 18 the users and companies of the internet rallied together to protest against SOPA and PIPA, bills that would censor the internet. Check out the numbers. It worked. Here’s part of a huge list, with even bigger names on it, of the sites that participated in the blackout. Google, Wikipedia, Reddit, BoingBoing and Wired are among them. Here’s the page Wikipedia displayed. The Wikipedia page about SOPA and PIPA was accessed more than 162 million times during the 24 hours the site was blacked out. More than eight million people looked up their elected representatives’ contact information via Wikipedia’s tool, crashing the Senate’s website. At one point, 1% of all tweets on Twitter included the #wikipediablackout hashtag.
It is likely the bills will be back in one form or another:
What’s the best way for me to help? (for U.S. citizens)
The most effective action you can take is to call your representatives [phone calls have the most impact] in both houses of Congress, and tell them you oppose SOPA, PIPA, and the thinking behind them.
What’s the best way for me to help? (for non-U.S. citizens)
Contact your country’s Ministry of Foreign Affairs or similar government agency. Tell them you oppose SOPA and PIPA, and any similar legislation. SOPA and PIPA will affect websites outside of the United States, and even sites inside the United States (like Wikipedia) that also affect non-American readers — like you. Calling your own government will also let them know you don’t want them to create their own bad anti-Internet legislation.
Megaupload’s website was taken down a day after the protest (without trial), with related people being arrested in New Zealand and property confiscated. Are we okay with helping enforce US copyright law, which, as SOPA and PIPA show, is heavily influenced by the entertainment industry? Is this what extradition should be used for?
It appears, at first glance, that Megaupload was removing infringing material on request, although its takedown procedure was molded around the way it stored files: only one copy of a file was kept, however many times it was uploaded, with each upload getting its own unique URL.
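That storage scheme (deduplicated blobs behind per-upload URLs, as described above; all the names here are made up) is easy to sketch, and it shows why removing one reported URL doesn't remove the underlying file:

```python
import hashlib
import uuid

blobs = {}   # content hash -> file bytes (stored once)
links = {}   # public URL id -> content hash

def upload(data):
    """Store data once per unique content, but mint a fresh URL id
    for every upload pointing at the shared blob."""
    digest = hashlib.sha256(data).hexdigest()
    blobs.setdefault(digest, data)   # identical uploads share one blob
    url_id = uuid.uuid4().hex        # unique per upload
    links[url_id] = digest
    return url_id

a = upload(b"same movie file")
b = upload(b"same movie file")
assert a != b and links[a] == links[b]  # two URLs, one stored copy

del links[a]                 # takedown request removes one URL...
assert links[b] in blobs     # ...but the blob stays reachable via b
```

Under this model, honouring a takedown notice for a specific URL and keeping the deduplicated file available to other uploaders are the same operation, which is presumably why the procedure looked the way it did.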
Megaupload has many similarities to other websites, which makes this concerning. It was definitely used for legitimate and legal purposes by legitimate users.