OpenAI might end up on the right side of history
note: I'm in MENA and am not affiliated with the military in any way.
When I first read the statement by Dario, I was shocked by how dismissive the military was about AI safety (not to mention privacy). Seeing Anthropic resist the military, I felt so proud of being a Claude user that I deleted GPT right away. It's nice to see your favorite products sync with your values.
But today, after thinking more about it, I realized something: for a government to allow one AI company to dictate terms opens up a precedent for AI companies in the future to resist governmental oversight. That might not be a big deal in the 2020s, but in the 2030s, by most estimates, many AI companies will be big enough to resist entire governmental structures. Maybe not the US or China, but they will definitely be big enough not to be easily influenced.
Those independent companies will eventually grow so large that no government can hope to tame them. I know that right now it seems impossible for a mere C-corp valued at less than a trillion to resist a government that spends 7 trillion each year. But zooming out, it feels likely that the next generation of AI companies will easily be valued at $10T. If you see a two-year-old who just learned how to talk suddenly start talking quantum, you can bet your a* he will grow up to be a powerhouse.
I know soft monetary power is very different from hard military power, but enough tokens of the first type can easily be converted into the second if: 1. you have a sufficiently ambitious CEO, and 2. the survival of the company is threatened in some way. I am not talking about AGI here, but good old private equity that does whatever it needs to survive, ruled by suits who have more loyalty to shareholders than to anyone or anything else.
At the end of the day, corporations are ruled by dictators (they have to be); governments are not (not in the West, at least). Maybe, just maybe, we should NOT trust private equity to seek anything but profits. Governments are manipulative and bloody, but at least we can vote.
It's worth separating "refusing a contract" from "resisting oversight" though. Anthropic declining DoD terms is still just a procurement decision, not a power grab, even if the blowback (getting labeled a supply chain risk) makes it feel weightier. The scary version of your concern is whether regulatory frameworks can keep pace with $10T companies, and on that I think you're right that the window is closing faster than governments realize.
Even if they were able to keep pace, with time and with more powerful corporations lobbying, the government will not be the same. I really hate writing this, but by all accounts it seems the future will include millions, not billions, of sapiens.
As long as governments hold a monopoly on violence, no individual or corporation can "resist."
> the next generation of AI companies will be easily valued at 10T
I'm not sure where this conclusion is coming from. We're very likely already in an AI bubble, so I think open/free models will eventually dilute the ridiculous valuations these companies have. Also, the natural increase in consumer hardware power will eventually let many people just use local models instead, for both privacy and cost reasons.
And seeing as most models are essentially just improved versions of the previous ones, with larger context and more training data, then unless some new "Attention Is All You Need" paper comes out and gives us a big step into AGI territory, I'm really not seeing a new company reach a $10T valuation by releasing marginally better models every couple of months, imho.
I know you consider yourself a pragmatist, but zoom out a little and think about it again... these idiotic humans built a couple of $1T companies with a stupid genAI algorithm in less than 50 years. By 2100, there's a very high chance they will do $10T.
AI right now feels a lot like the early internet: lots of hype and skepticism, but also real shifts happening underneath. It's hard to know exactly how it'll play out, but the impact on how people build tools and products is already pretty clear.
Good thoughts. Welcome to the discourse.
A couple of things to put out there. First, the US has a fairly strong rule of law that the government cannot compel speech -- while speech can be blocked or stopped, it's a hard rule of the republic that we cannot force certain speech. This is the legal theory behind canary statements, by the way: publish the statement "I have not been forced to remove any user from this system by a secret court," and when it's no longer true, you remove the statement.
This speech concept extends to, say, software: a company can refuse to create software or tooling or what have you, if it chooses. What if a company has something deemed to be in the national security interest but does not wish to use it on behalf of the country? Traditionally we have both soft and hard power applied. Soft: conversations, hearts and minds, perhaps threats, aimed at getting a company on board with the national goal.
Hard: nationalization. The US has typically reserved nationalization for bailouts / reworking pernicious economic incentives, but we have had some wartime nationalizations in the past -- Google tells me Western Telegraph and Smith and Wesson -- and Truman nationalized basically everything whenever he wanted before and during the Korean War.
Nationalizing a valuable, research-dominated company like Anthropic is risky. You can't force research scientists to work; you could almost certainly find people to keep operating the inference. So you may get something today, and trigger a legendary set of Supreme Court cases, but you have no guarantee the goose will keep laying its golden eggs once Sec. Hegseth is in charge. I would guess this is going to be a very, very last resort even for the most aggressive of governments when there are credible alternatives in the economy. Under those terms, economics / market forces can do a lot of the work.
Upshot: I predict this is Sturm und Drang, and we'll see Anthropic figure out how to keep its gov contracts while oAI continues to work its way into more government work simultaneously.
I agree; most likely the AI companies will play ball this round.
> it opens up a precedent for AI companies in the future to resist governmental oversight
A company being allowed to NOT do business with the government somehow makes oversight impossible? Make it make sense.
The USA is already basically controlled by oligarchs. The road there did not go through companies refusing business.
The US is not controlled by oligarchs; you just think that because you've never experienced what a real oligarchy is. You should travel more.