Moore's law is dead. There is, as I understand it, no way to shrink transistors any further.

But there are lateral ways of thinking, other ways of getting more speed. I've gleaned a few of them, but which would you pick as the future of computing as we know it?

3D processors? Quantum computing? Photonic/optronic computers? Carbon nanotube field-effect transistors? Maybe you have some other ideas in mind! I'd like to hear them.

What I would disqualify:
AI: This doesn't move anything faster.
Brain-Machine interface: This is stupid and dangerous; plus it doesn't actually move data faster.
Blockchain: This is literally worse computing.

Parallel/Multiple GPUs made easier and more efficient for customers.
To change the speed you need to change one of the variables, like air or gravity, because without resistance your max speed becomes literally lightspeed.*

* Currently that's a "theoretical maximum", because we don't have any testable way to remove all resistance, but that's the basic idea.
Post edited May 08, 2024 by ussnorway
This might sound crazy, but lately I have been open to the idea of slime mold processors/computers.
The future is steam.
(No, not the kind of "Steam" with a capital S. -.- )
I mean micro clockwork computing.
g2222: The future is steam.
(No, not the kind of "Steam" with a capital S. -.- )
I mean micro clockwork computing.
Back to the Babbage engines?
00063: This might sound crazy, but lately I have been open to the idea of slime mold processors/computers.
Go on.
ppavee: Parallel/Multiple GPUs made easier and more efficient for customers.
Iiiii don't think that does anything. There's only so much bus width to throw polygons into.
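Some rough numbers on that, since the claim is easy to sanity-check (the PCIe figure is approximate, and the per-frame payload is invented purely for illustration):

```python
# Back-of-envelope: how fast inter-GPU traffic saturates a PCIe link.
# The bandwidth figure is approximate; the payload size is made up.
PCIE4_X16_GBPS = 32.0    # ~2 GB/s per lane x 16 lanes, per direction (approx.)

frame_data_gb = 0.5      # hypothetical data crossing the bus each frame
target_fps = 144

required_gbps = frame_data_gb * target_fps
print(f"needed: {required_gbps:.0f} GB/s, available: {PCIE4_X16_GBPS:.0f} GB/s")
# needed: 72 GB/s, available: 32 GB/s -> the bus is the wall, not the GPUs
```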
Post edited May 08, 2024 by dnovraD
Thin clients, a.k.a. today's generic desktop/laptop computing and gaming moving to data centers.

Low-power and specialized workloads staying local, just rebranded as "Edge Computing" so it doesn't sound old school.
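The physics actually favors keeping latency-sensitive work local. A quick sketch (distances are illustrative, and this ignores routing and processing overhead entirely):

```python
# Light in fiber travels at roughly 200,000 km/s (~2/3 c), so distance
# alone puts a hard floor under round-trip latency to a data center.
def round_trip_ms(distance_km: float, fiber_km_per_s: float = 200_000.0) -> float:
    return 2.0 * distance_km / fiber_km_per_s * 1000.0

# Illustrative distances: nearby edge node, regional DC, distant DC.
for km in (50, 500, 5000):
    print(f"{km:>5} km -> {round_trip_ms(km):5.1f} ms minimum round trip")
```

At 5,000 km the floor is already 50 ms before a single packet is processed, which is why the heavy-but-latency-tolerant stuff goes to the data center and the twitchy stuff stays at the edge.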
dnovraD: There is, as I understand it, no way to shrink transistors any further.
That sentence had me instantly thinking of "1941" and the Japanese soldier who tries to get one of these old mantelpiece radios through the entry hatch of the submarine: "We've got to figure out how to make these things smaller!"

XD
Hear me out: Macro Processors.
Mentats.
Barring any new breakthroughs in the materials used in CPUs that would allow increasing their clock speed, we would likely need to become a lot more efficient at splitting as many things as possible to run on separate cores without being plagued by too many sync-related issues.

Or we need far better compilers that can continue to take in the crappy higher-level code humans tend to produce nowadays and still achieve nearly the same optimization level that a team of very dedicated expert assembler coders could reach.
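For the curious, a minimal Python sketch of the easy case of that core-splitting (the workload is invented; the hard part in real programs is exactly the sync issues mentioned above):

```python
# Minimal sketch: an embarrassingly parallel workload split across cores.
# Real code rarely splits this cleanly; shared state is where the
# sync headaches begin.
from multiprocessing import Pool

def crunch(n: int) -> int:
    # Stand-in for any CPU-bound task (hypothetical workload).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8       # eight independent chunks of work
    with Pool() as pool:         # defaults to one worker per CPU core
        results = pool.map(crunch, jobs)
    print(sum(results))
```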
As way smarter heads than mine have said: improved efficiency of both code and hardware. Teach programmers to optimize again and you can get massive speed increases on the same hardware. And make hardware that uses less power and produces less heat, and you'll be able to add more parallel units into the same package.
And, of course, teach people that enough is enough and we don't necessarily need the next "big" thing.
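A trivial example of the kind of win "optimizing again" buys on identical hardware (exact numbers vary by machine; the point is the algorithmic choice, not the timings):

```python
# Same machine, same answer, wildly different cost: membership tests on a
# list scan every element (O(n)); on a set they hash the key (O(1) avg).
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

slow = timeit.timeit(lambda: 99_999 in haystack_list, number=1_000)
fast = timeit.timeit(lambda: 99_999 in haystack_set, number=1_000)
print(f"list: {slow:.3f}s  set: {fast:.3f}s")  # set wins by orders of magnitude
```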
Nothing truly interesting & groundbreaking is going to happen in computing until (and if) they figure out how to make VR that is 100% indistinguishable from living in the real world, and/or start selling androids that are 100% indistinguishable from real humans (at a price cheap enough that the average person could afford one).
Biologics.
OldFatGuy: Biologics.
Meat computers?