iFixit CEO Blasts Anthropic for “1 Million Server Attacks in 24 Hours”; AI Company's Own Chatbot Also Disapproved


Almost every day, it seems, there is a new reason to be irritated by AI: the impact on the economy, jobs lost to automation, copyrighted content scraped without permission, and the simple fact that a lifeless neural network can convince someone it is conscious. And now there is another reason to be angry at the ethereal stuff.

Sure, we knew there were companies scraping websites to train their AI models. But we hadn't thought much about the impact this has on the servers running those websites. iFixit CEO Kyle Wiens is here to inform us that this is indeed happening, courtesy of the AI company Anthropic: "Do you really need to hit our servers a million times in 24 hours?"

Unless Wiens is exaggerating, it is no surprise that this would tie up the company's DevOps resources; a million hits per day is enough to justify no small amount of annoyance.

The problem is that this bandwidth chugging becomes even more ridiculous when put into context: not only do AI companies appear to be straining server resources, but iFixit's terms explicitly prohibit them from using the site's content in the first place.

iFixit's terms of use state that "copying or distributing any content, materials, or design elements on this site for any purpose, including machine learning or training AI models, is strictly prohibited without the express prior written permission of iFixit." So there should be no reason for an AI company to hammer iFixit's site at all. If Anthropic cannot use the scraped data for training anyway, what is the point? Do they want us to believe they are doing it for fun?

In any case, Wiens decided to ask Claude, Anthropic's own AI, about this. Claude seems to agree with iFixit: when asked what to do if you are training a machine learning model and find a clause like the one above in a site's terms of service, Claude clearly said, "Do not use that content."

This clause, as Wiens points out, can be found simply by reading the terms of service. That leads me to believe that at least some AI companies would rather beg forgiveness than ask permission, and so don't bother to check the ToS in the first place.

As an aside, iFixit's robots.txt file now disallows Anthropic's crawler, explicitly forbidding it from crawling the site (unfortunately, robots.txt is purely advisory, and badly behaved bots can simply ignore it). The entry may have already been there, but I would imagine iFixit added it to make a statement against naughty bots.
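For reference, a robots.txt entry along these lines would look something like the sketch below. "ClaudeBot" is the user-agent name Anthropic's crawler reports; the exact rules iFixit uses are an assumption here, not a copy of their file:

```
# Hypothetical robots.txt entry blocking Anthropic's crawler.
# "Disallow: /" forbids the named user agent from fetching any path.
User-agent: ClaudeBot
Disallow: /
```

Note that this is a request, not an enforcement mechanism: compliant crawlers check robots.txt before fetching, but nothing at the protocol level stops a bot from ignoring it.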
