They can still have the misaligned-incentive problem, because the interests of the current voting-age population diverge from the interests of the unborn.
The unborn have no political power to get laws passed whether the law comes into force today or in a hundred years.
There's zero incentive for a congressman to put work into progressing a bill that does something in a hundred years that nobody currently has an interest in.
And just because ballot initiatives can be started doesn't mean they're going to pass into law.
You claimed that there's something specific preventing them from being passed into law, namely the opposition of Republicans and Democrats. Those groups control the legislature but not ballot initiatives.
Actually addressing your specific example is not strawmanning.
Concretely, someone making a ballot initiative to change the voting system has the option to write in a provision that it will only take effect X years in the future. In practice, I would expect that this would not help the ballot initiative pass but would more likely reduce its chances of success, because it demotivates the participants.
There are cases where there's a consensus among lawmakers that it would be good to do something, and current opposition can be circumvented by writing the law into the future. But that strong clarity among lawmakers that something would be good does not exist in the examples you mentioned.
In Berlin, where I live, one example is a reform that reduced the number of districts and thus the number of district mayors. It was written into the future to get less opposition from the people currently in district government. This was possible because there was a strong political will within the governing coalition to reform government in order to cut costs.
There exists no such will in the kind of cases you mentioned.
In government, it’s not just a matter of having the best policy; it’s about getting enough votes. This creates a problem when the self-interests of individual voters don’t match the best interests of the country.
I think that model is not sophisticated enough for thinking about how laws get passed.
For instance, voting researchers widely consider the presidential voting system in America to be inferior to many alternatives. But if you want to change it, you require consent from Democrats and Republicans—i.e. the very people who benefit from the status quo.
That claim seems false. Ballot initiatives to change voting systems are possible in many states, state law dictates how electors for the president are chosen, and it's possible for individual states to adopt a different voting system.
The main problem with changing the voting system is that most people don't care about the voting system.
One common view is that most of the arguing about politics will be done by AI once AI is powerful enough to surpass human capabilities by a lot. The values that are programmed into the AI matter a lot.
Ah, by "producing GPUs" I thought you meant physical manufacturing.
Yes, that's not just about new generations of fab equipment.
GPU performance for training models increased faster than Moore's law over the last decade. Even without AI, the improvement curve is not slow.
Perhaps, as we move toward more and more complex and open-ended problems, it will get harder and harder to leave humans in the dust?
A key issue with training AIs for open-ended problems is that it's a lot harder to create good training data for open-ended problems than it is to create high-quality training data for a game with clear rules.
It's worth noting that the problems where humans outperform computers right now are not really the open-ended tasks but things like folding laundry.
A key difference between playing go well and being able to fold laundry well is that training data is easier to come by for go.
If you look at the quality of the decisions many professionals make when probability is involved (meaning there's a lot of uncertainty), they are pretty bad.
Sure. I'm just suggesting that the self-improvement feedback loop would be slower here, because designing and deploying a new generation of fab equipment has a much longer cycle time than training a new model, no?
You don't need a new generation of fab equipment to make advances in GPU design. A lot of the improvements of the last few years did not depend on constantly having a new generation of fab equipment.
I think you basically ignore the existing wisdom about what limits the size of firms and try to explain those limits with a model that doesn't tell us very much about how companies work.
We have antitrust laws. There's the Innovator's Dilemma, as described by Clayton Christensen, which explains why companies decide against entering certain businesses. Markets often outperform hierarchical decision-making. Uber could be a lot bigger if it employed all its drivers and owned all the vehicles, but it would rather not do that part of the business and instead use market dynamics.
Uber would be a lot bigger if it employed all the drivers directly. Managing people often adds inefficiencies. The more layers of management an organization has, the worse the incentive alignment tends to be.
If you add a bunch of junior programmers to a software project, it might very well slow the project down, because it takes effort for the more experienced programmers to manage them. GitHub Copilot, on the other hand, makes an experienced programmer more productive without adding the friction of managing junior employees.
Some technologies eventually encounter fundamental limits. The rocket equation makes it difficult to reach orbit from Earth’s gravity well; if the planet were even moderately larger, it would be nearly impossible. It’s conceivable that some sort of complexity principle makes it increasingly difficult to increase raw intelligence much beyond the human level, as the number of facts to keep in mind and the subtlety of the connections to be made increases[7].
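The rocket-equation claim above can be spelled out. The following is an illustrative calculation I'm adding, with a hydrogen/oxygen exhaust velocity and typical delta-v figures assumed, not numbers from the original:

```latex
% Tsiolkovsky rocket equation: the required propellant mass ratio
% grows exponentially with the delta-v the mission demands.
\Delta v = v_e \ln\frac{m_0}{m_f}
\quad\Longrightarrow\quad
\frac{m_0}{m_f} = e^{\Delta v / v_e}
% Earth: \Delta v \approx 9.4\ \mathrm{km/s} to low orbit and
% v_e \approx 4.4\ \mathrm{km/s} (hydrogen/oxygen) give
% m_0/m_f \approx e^{2.1} \approx 8.5.
% A planet demanding twice the \Delta v would need
% m_0/m_f \approx e^{4.3} \approx 72, i.e. ~99\% propellant.
```

This exponential dependence is why a moderately larger gravity well makes chemical rockets impractical rather than merely harder.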
We can look at a skill that's about applying human intelligence, like playing Go. It could have been the case that the maximum skill level is near what professional Go players are able to accomplish. Instead, AlphaGo went very far past what humans can accomplish in a very short timeframe, and AlphaGo doesn't even do any recursive editing of its own code.
GPU capacity will not be increasing at the same pace as the (virtual) worker population, and we will be running into a lack of superhuman training data, the generally increasing difficulty of progress, and the possibility of a complexity explosion.
AI can help with producing GPUs as well. It's also possible to direct a lot more of the world's economic output into producing GPUs than is currently done.
The general problem you have with allowing everyone to vote on anything is that a parliament takes a lot of votes.
Even a senator whose main job is to be a senator needs a staff to make informed decisions on every vote.
You don't really want political decisions to be driven by YouTube influencers together with anyone who has the money to run a marketing campaign to frame a given vote.
Liquid Democracy is a way to get people to delegate when they don't have the time to make an informed decision on a given vote. It's worth noting that the Pirate Party in Germany used Liquid Democracy for its party decisions. That didn't stop the party, when it was in the Berlin parliament, from often having half of its parliamentarians vote "yes" and the other half "no".
The system that Audrey Tang built in Taiwan is also worth looking into as prior art.
It's quite unclear what you are proposing. It sounds a bit like you are trying to reinvent Liquid Democracy without knowing the term or any of the practical implementations of it.
No. You are not saying anything about the required initial buy-in by lawmakers in your article and you explicitly suggest that it's a strategy that people who aren't lawmakers can use.
I did not generalize things in the post you linked, either. The other post was about a very explicit claim you made about whether certain governments care about the granularity of policies.