ROSS RANTS: ROBOT JOBS


Just to note the biggest problem with trying to make human-like AI: do you know how much processing power your brain has? They managed to replicate brain activity with the K supercomputer in Kobe, Japan. It took the combined power of all 82,944 processors 40 minutes to simulate 1 second of activity in 1% of the brain. As in, the human brain is TWO HUNDRED FORTY THOUSAND times faster than the world's most powerful supercomputer. Quick and dirty math says it would take NINETEEN BILLION, NINE HUNDRED AND SIX MILLION, FIVE HUNDRED AND SIXTY THOUSAND processors to match our brain, and 2.4 billion watts to run them all. That's a nuclear reactor right there, where a human brain runs off of 20 watts and gets them out of metabolising GLUCOSE, for fuck's sake.
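(If anyone wants to check that arithmetic, here's a rough back-of-the-envelope sketch. It assumes only the figures quoted above: 82,944 processors and 40 minutes of wall-clock time to simulate 1 second of activity in 1% of the brain.)

```python
# Back-of-the-envelope check of the numbers above, using only the quoted
# K computer figures (40 minutes for 1 second of activity in 1% of the brain).
wall_clock_s = 40 * 60      # 2400 seconds of compute time
simulated_s = 1             # 1 second of biological time simulated
brain_fraction = 0.01       # only 1% of the brain was modelled

slowdown = (wall_clock_s / simulated_s) / brain_fraction
print(slowdown)                              # 240000.0 -> "240,000 times faster"

k_processors = 82_944
print(f"{k_processors * slowdown:,.0f}")     # 19,906,560,000 processors for real time
```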

 

What was it Ross said? That once you reach a certain point, there are really huge barriers and you need to contend with reality? Yeah, that's the understatement of the fucking century. Replicating this machine is just not possible.

 

And there's no point, either. Why would you make a computer that has 240,000 times the power of our greatest supercomputer, and impede its abilities so much it has trouble doing long division? That's just fucking stupid; nobody would ever do that. Human-level AI is not only impossible, it's extraordinarily wasteful and pointless.

"Reality has a well-known liberal bias." -Stephen Colbert.


Admittedly, we've recently fallen behind Moore's law, but if Moore's law did hold true, it would suggest that in ~30 years our top supercomputer would be able to run a human brain in real time.
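(Rough sanity check on that ~30 years figure, assuming the 240,000x gap from the earlier post and a doubling of compute every 18 to 24 months; both are assumptions, and as noted we've recently fallen behind that pace.)

```python
import math

# How many doublings to close a 240,000x gap, and how long that takes under
# an assumed 18-24 month doubling period (classic Moore's law territory).
gap = 240_000
doublings = math.log2(gap)                   # ~17.9 doublings
for months in (18, 24):
    print(f"{months} months per doubling -> ~{doublings * months / 12:.0f} years")
# 18 months per doubling -> ~27 years
# 24 months per doubling -> ~36 years
```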

 

The fact that human brains are notoriously inefficient with their massive processing power isn't really an argument in your favor. This simply indicates that we don't need a computer nearly as computationally powerful as a human brain to have an AI more intelligent than a human.


When Moore wrote that, we thought the human brain was MILLIONS OF TIMES less powerful than it is. We also didn't know there was a hard limit on processing power with all current technologies. We need to create entirely new computing methods to advance much further.

"Reality has a well-known liberal bias." -Stephen Colbert.


The fact that human brains are notoriously inefficient with their massive processing power isn't really an argument in your favor. This simply indicates that we don't need a computer nearly as computationally powerful as a human brain to have an AI more intelligent than a human.

Yeah, all you need to do is strip away all the stupid little complexities that humans have, and there you go. Robots don't need to be as complex as we are to be smarter than us. Hell, I'd say the opposite is true. Why would you give robots dumb human quirks like emotions and preferences? All those quirks that are "characteristics of humanity" have effectively impeded us as a species, and we would be so much better without them.


I'm not saying I started the fire. But I most certainly poured gasoline on it.


The fact that human brains are notoriously inefficient with their massive processing power isn't really an argument in your favor. This simply indicates that we don't need a computer nearly as computationally powerful as a human brain to have an AI more intelligent than a human.

The thing is, the brain's processing power is mostly subconscious. Just look at the act of getting up, going to the fridge to get something to eat, and going back to what you were doing. If you try to translate that into machine language, that task is hugely complicated; and that's ignoring the aspect of it doing that of its own decision rather than as a preprogrammed command. Yet we do it with our minds wandering half the time.
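(To give a feel for how badly an everyday errand explodes once you spell it out, here's a toy task decomposition. Every task name in it is made up for illustration; it's nothing like a real robotics planner, and each leaf would itself break down into thousands of motor and perception steps.)

```python
# Toy decomposition of "go get a snack" (all names invented for illustration).
TASKS = {
    "get a snack": ["stand up", "walk to kitchen", "open fridge", "choose item",
                    "grab item", "close fridge", "walk back", "sit down",
                    "resume what you were doing"],
    "walk to kitchen": ["plan a path around furniture", "balance on two legs",
                        "take steps without falling over"],
    "open fridge": ["locate the handle", "estimate grip force", "pull the door",
                    "stop before it hits the wall"],
    "choose item": ["recognise the objects inside", "guess what's still edible",
                    "weigh preference against effort"],
}

def expand(task, depth=0):
    """Print a task, then recursively expand any sub-tasks we bothered to list."""
    print("  " * depth + task)
    for sub in TASKS.get(task, []):
        expand(sub, depth + 1)

expand("get a snack")
```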

 

Though that may be where an AI would hold an "advantage" (depends on how you want to look at it) over the human brain: it wouldn't have the barrier of a subconscious.

Retired Forum Moderator


We also have super-extra-double-plus-good compression on all the data we take in. I'd wager that's most of our processing power, right there, just compressing everything because we take in a batshit crazy amount of data, all the time. We also NEED TO, in order to function in the real world as we do. So no, you CANNOT make a robot brain orders of magnitude less powerful by removing these complexities, because what you're cutting is its interpretation of sensory input, analysis of said data, and mega-compression of all of said data.

 

Your "simple" robot would be dumb as shit and have absolutely no perception whatsoever. It would also miss out on our maximum-overcompression and fill up its hard drives in hours while ours last decades. This is a no-go.


"Reality has a well-known liberal bias." -Stephen Colbert.

Emergence is, in my opinion, the most important philosophical concept the average person can learn about. I'd go so far as to say that learning how the r-pentomino works is like learning about the theory of evolution. You can learn the basics in ten minutes, and it doesn't just explain a complicated-looking system, but complexity itself.
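(If anyone wants the ten-minute version in code: below is a minimal Game of Life sketch, just enough to drop in the r-pentomino's five cells and watch them churn for the thousand-plus generations it takes to settle down. The representation, a bare set of live coordinates, is only one simple way to do it.)

```python
# Minimal Conway's Game of Life, enough to run the r-pentomino.
from itertools import product

def step(live):
    """One Game of Life generation over a set of live (x, y) cells."""
    neighbours = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbours[cell] = neighbours.get(cell, 0) + 1
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

# The r-pentomino: five cells whose long-lived chaos is a textbook example
# of emergence from trivially simple rules.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
for generation in range(200):
    cells = step(cells)
print(len(cells), "live cells after 200 generations")
```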

 

Thing is, I tend to think computers and programs work against emergent properties in themselves. It's inefficient to have data interacting with every other bit of data and also arbitrarily being processed by many different rules. Insanely inefficient. We're talking many orders of magnitude of speed loss for no useful reason. Programmers don't do that. There won't be a point when it's useful to ask a law manipulation program to be so broad in its data correlation that it could even potentially decide judge shopping is despicable. Maybe some year there will be genuinely opinionated systems, but not soon and not by accidental emergence.

An OS is already quite complex. One action affects numerous subroutines very quickly. (Win95 was evidence of this: One program crashes and it brings down the whole system. Ahhh, the bad ol' days.)

 

I'd even say that some bugs are emergent properties: Behavior of individual components will give expected results, but when they interact, you get something different and entirely unexpected.

 

 

 

Just to note the biggest problem with trying to make human-like AI: do you know how much processing power your brain has? They managed to replicate brain activity with the K supercomputer in Kobe, Japan. It took the combined power of all 82,944 processors 40 minutes to simulate 1 second of activity in 1% of the brain. As in, the human brain is TWO HUNDRED FORTY THOUSAND times faster than the world's most powerful supercomputer. Quick and dirty math says it would take NINETEEN BILLION, NINE HUNDRED AND SIX MILLION, FIVE HUNDRED AND SIXTY THOUSAND processors to match our brain, and 2.4 billion watts to run them all. That's a nuclear reactor right there, where a human brain runs off of 20 watts and gets them out of metabolising GLUCOSE, for fuck's sake.

 

What was it Ross said? That once you reach a certain point, there are really huge barriers and you need to contend with reality? Yeah, that's the understatement of the fucking century. Replicating this machine is just not possible.

 

And there's no point, either. Why would you make a computer that has 240,000 times the power of our greatest supercomputer, and impede its abilities so much it has trouble doing long division? That's just fucking stupid; nobody would ever do that. Human-level AI is not only impossible, it's extraordinarily wasteful and pointless.

Replicating the machine is not possible. Today. We're still trying to use vacuum tubes to solve a huge computing problem.

That's where I think quantum computers will come in.

 

And to run with your example of power requirements: a modern PC duplicated with vacuum tubes would be enormous.

https://www.youtube.com/watch?v=hQWcIkoqXwg shows a byte of memory made with vacuum tubes. A phone can have more than a billion transistors in it now.

Past the 7-minute mark, they give some figures for the number of cubic kilometers it would take to build a modern PC (as of 2012) with vacuum tubes. Never mind the power and cooling requirements.

 

The barrier was vacuum tubes. The revolution was transistors.

Now we're running into the barrier of transistors.

Up next: Qubits. Will they be the ticket to truly intelligent AIs in a small package? Depends on how densely we can pack them. Maybe we need room-temperature superconductors too, working in concert with qubits to be able to make it compact without melting. We'll see. :)

 

 

 

 

The fact that human brains are notoriously inefficient with their massive processing power isn't really an argument in your favor. This simply indicates that we don't need a computer nearly as computationally powerful as a human brain to have an AI more intelligent than a human.

The thing is, the brain's processing power is mostly subconscious. Just look at the act of getting up, going to the fridge to get something to eat, and going back to what you were doing. If you try to translate that into machine language, that task is hugely complicated; and that's ignoring the aspect of it doing that of its own decision rather than as a preprogrammed command. Yet we do it with our minds wandering half the time.

 

Though that may be where an AI would hold an "advantage" (depends on how you want to look at it) over the human brain: it wouldn't have the barrier of a subconscious.

It may have something analogous to a subconscious, though: background or low-priority processes, or even separate subprocessors. I work with PIC chips, and they come with multiple internal timers. Rather than tie up the main processor with simple timing functionality, the timers run independently. Same with the incoming data buffer: it's tiny, only 1 byte, but that byte consists of multiple bits, each a potential transition of an incoming voltage. The receiver module handles translating and storing the incoming byte without the processor's involvement.

With the timer or receiver, they only interrupt the main processor when they finish some specified task. If they never interrupted, the processor wouldn't really know that they're even doing anything.
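(A loose software analogy for that, not actual PIC code: a background "peripheral" does its work independently and only raises a flag when it's done, while the main loop stays busy with something else.)

```python
# Software analogy for interrupt-driven peripherals: the worker only signals
# the main loop when its task completes, like a hardware timer or UART receiver.
import threading
import time

timer_done = threading.Event()

def background_timer(seconds):
    """Stands in for a hardware timer: counts on its own, then raises a flag."""
    time.sleep(seconds)
    timer_done.set()            # the "interrupt": signal the main loop

threading.Thread(target=background_timer, args=(0.5,), daemon=True).start()

work_units = 0
while not timer_done.is_set():  # the main "processor" keeps doing its own work
    work_units += 1
print(f"Main loop got {work_units} units of work done before the timer fired")
```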


The barrier was vacuum tubes. The revolution was transistors.

Now we're running into the barrier of transistors.

Up next: Qubits. Will they be the ticket to truly intelligent AIs in a small package? Depends on how densely we can pack them. Maybe we need room-temperature superconductors too, working in concert with qubits to be able to make it compact without melting. We'll see. :)

 

The problem is that quantum computing might not even be POSSIBLE. And our current advancement is the solid state drive, actually, and it too has a hard limit that isn't that much higher. I mean, a couple orders of magnitude but that is NOT enough.

 

Also, any of these generates a significant amount of heat. Packing it into smaller spaces produces more heat. If you don't mind your entire building-sized computer being hotter than the centre of the sun, that's not a problem. But you do mind, what with the whole "destroying itself and everything else around for miles by being so hot it puts out hard rads" thing.

 

The best option would be to just find a way to make an actual human brain and manipulate it for your ends. Find a way to produce just a brain, support it inside a machine, and tweak it to do what you need to. Then we find a way to make it interface directly with a more conventional computer, and we've got all the benefits of a human brain and all the benefits of a supercomputer, in a compact package.

"Reality has a well-known liberal bias." -Stephen Colbert.

The problem is that quantum computing might not even be POSSIBLE. And our current advancement is the solid state drive, actually, and it too has a hard limit that isn't that much higher. I mean, a couple orders of magnitude but that is NOT enough.

 

Also, any of these generates a significant amount of heat. Packing it into smaller spaces produces more heat. If you don't mind your entire building-sized computer being hotter than the centre of the sun, that's not a problem. But you do mind, what with the whole "destroying itself and everything else around for miles by being so hot it puts out hard rads" thing.

 

The best option would be to just find a way to make an actual human brain and manipulate it for your ends. Find a way to produce just a brain, support it inside a machine, and tweak it to do what you need to. Then we find a way to make it interface directly with a more conventional computer, and we've got all the benefits of a human brain and all the benefits of a supercomputer, in a compact package.

Google's got a very rudimentary quantum computer in testing. It doesn't do a whole lot right now, but that's probably like saying transistors are useless for computing if you're evaluating a 200-transistor logic gate against a cluster of Core i7 processors.

 

 

Either way, the brain does its processing and it works. I think we can find a way of mimicking or duplicating that functionality in a similar volume; it's clearly not prohibited by the laws of physics, otherwise it simply wouldn't work.

 

 

 

The best option would be to just find a way to make an actual human brain and manipulate it for your ends.
Then you'll end up with ethical problems more quickly than you will with artificial AIs. A factory-default human brain has other firmware problems: Don't give it enough sensory input and it will go insane. Tell it to do something like a computer can do ("stare at this input 24/7 and let me know if it changes for more than 20 milliseconds") and it'll probably still go insane. It might also need to sleep periodically; it seems that the need to sleep was woven into our genome at a very low level. Earth's been rotating for a very long time, and life here adapted to that pretty thoroughly.


The sleep part I can actually answer. We sleep, near as we can tell, to flush out waste chemicals from our brain. Do that automatically with a support system, and we wouldn't need to sleep. We also repair tissue damage while sleeping, likely just for convenience's sake, but without a body the brain wouldn't need that either.

 

Also, that given task would not require an AI. Why would you want a full AI just to monitor a data stream? That's stupid. If your needs aren't enough to require an AI, don't use an AI. And if they are enough to require an AI, I think keeping it busy will be a non-issue.

"Reality has a well-known liberal bias." -Stephen Colbert.

The sleep part I can actually answer. We sleep, near as we can tell, to flush out waste chemicals from our brain. Do that automatically with a support system, and we wouldn't need to sleep. We also repair tissue damage while sleeping, likely just for convenience's sake, but without a body the brain wouldn't need that either.
I remember seeing something... I think it was a TED Talk... that said certain genes only activate and do their thing during sleep.

That talk made it sound like sleep is still poorly understood.

 

 

 

Also, that given task would not require an AI. Why would you want a full AI just to monitor a data stream? That's stupid. If your needs aren't enough to require an AI, don't use an AI. And if they are enough to require an AI, I think keeping it busy will be a non-issue.
It was just a simplified example to illustrate a point. Tasks where you want that brain-in-a-box might end up being intensely boring to that brain. Stash your brain-in-a-box to operate a robot like that Baxter thing: It'll still do simplistic tasks, but the intelligence adds versatility. It's effectively a way of automating small tasks for low-volume jobs. Normally you want to set up a robot to make thousands upon thousands of things, amortizing the setup time across many units. Robots don't make as much sense for small production runs due to the setup time. An intelligent AI could take instruction easily, or even figure things out on its own. Still boring though. There's a difference between "busy" and "interesting."

 

One summer I worked at a warehouse unloading and palletizing ladders. I was damn busy, but bored senseless. I did sometimes have to kick in at more-than-robot intelligence, though, to figure out how to unhook ladders from the pile where the neat stacks had shifted during transit. >95% automatable, but with enough in there that you need a brain to keep unstacked ladders from gumming up the works.

 

 

 

I guess with adequate manipulation, maybe you could end up with a mind that readily enjoys or tolerates extreme repetitiveness. It would also need to tolerate an abrupt absence of that stimulus, such as when a machine upstream fails abruptly, without throwing a tantrum.

...Reverse-engineer a brain and modify it (along with the ethical complications), or forward-engineer something using artificial means (and the potential eventual ethical complications)... either one's a substantial challenge.


Okay. For that, use a primate brain or something. Problem solved. Okay, not really, but that's a stupid thing to use an AI for anyway; just use robots all the time and have a little human oversight. I'm done with this conversation at this point. When my fuck supply is replenished, maybe I'll give one, but right now I'm fresh out.

"Reality has a well-known liberal bias." -Stephen Colbert.

(And the Bitcoin market might experience disruptions. "...So, someone just made a million bitcoins. Yesterday. Interesting.")

You obviously don't understand how bitcoin mining works...

Don't insult me. I have trained professionals to do that.

(And the Bitcoin market might experience disruptions. "...So, someone just made a million bitcoins. Yesterday. Interesting.")

You obviously don't understand how bitcoin mining works...

Admittedly not terribly much, so I guess it doesn't work quite like I thought. I did at least check Wiki a bit and saw that there's a limit to the number of bitcoins, so I revised down my "million" figure. I know it involves a lot of calculating, and that you want to have high-end videocards or dedicated ASICs to work on it.
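(For the curious, the "lot of calculating" boils down to a brute-force hash search. Here's a toy sketch of that idea; it's a simplified proof-of-work, not the actual Bitcoin block format or difficulty rules.)

```python
import hashlib

# Toy proof-of-work: find a nonce whose double-SHA256 hash has enough leading
# zero bits. (Simplified sketch, not the real Bitcoin protocol.)
def mine(block_data: bytes, zero_bits: int) -> int:
    target = 2 ** (256 - zero_bits)
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~2^20 (about a million) hashes on average at this setting; real mining
# difficulty is astronomically higher, hence the videocards and ASICs.
print(mine(b"example block", 20))
```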

I assume by your tone that the market would somehow have to adapt to someone suddenly entering it with a computer whose capabilities are orders of magnitude beyond a good mining cluster?

 

 

I'll retract the thing on bitcoins then, but still leave standing the problems with current encryption methods. I'm thinking NSA stuff for one thing: Capture data encrypted today, wait until a quantum computer is available, and brute-force stuff they captured and flagged as being of interest. Hopefully these problems are resolved by the time quantum computer tech arrives as something consumers can get their hands on.

Though I'm not entirely confident in that, given that we still have site operators storing passwords either in plain text or using a basic hash that a good videocard cluster can crack without breaking a sweat.
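(To illustrate that last bit with a generic sketch, not aimed at any particular site: the difference between a bare, fast hash and a salted, deliberately slow key-derivation function.)

```python
import hashlib
import os

password = b"hunter2"

# What a careless site might store: fast, unsalted, trivially attacked with
# precomputed tables or a GPU cluster doing billions of guesses per second.
weak = hashlib.sha256(password).hexdigest()

# What it should store instead: a per-user salt plus many PBKDF2 iterations,
# which turns each guess from nanoseconds into a meaningful cost.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(weak)
print(salt.hex(), strong.hex())
```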

Admittedly not terribly much, so I guess it doesn't work quite like I thought. I did at least check Wiki a bit and saw that there's a limit to the number of bitcoins, so I revised down my "million" figure. I know it involves a lot of calculating, and that you want to have high-end videocards or dedicated ASICs to work on it.

I assume by your tone that the market would somehow have to adapt to someone suddenly entering it with a computer whose capabilities are orders of magnitude beyond a good mining cluster?

 

 

I'll retract the thing on bitcoins then, but still leave standing the problems with current encryption methods. I'm thinking NSA stuff for one thing: Capture data encrypted today, wait until a quantum computer is available, and brute-force stuff they captured and flagged as being of interest. Hopefully these problems are resolved by the time quantum computer tech arrives as something consumers can get their hands on.

Though I'm not entirely confident in that, given that we still have site operators storing passwords either in plain text or using a basic hash that a good videocard cluster can crack without breaking a sweat.

 

https://www.youtube.com/watch?v=J05tQe2xBB4

"Reality has a well-known liberal bias." -Stephen Colbert.

bitcoins are a joke

Actually, they are stock that you can earn simply by loaning your CPU cycles to the stock-issuing organization. They are traded the same way stocks and raw materials are on an exchange. Unless you think that all forms of stock are also a joke...

Don't insult me. I have trained professionals to do that.


This is what I have been saying, working at a warehouse for a certain online retailer named after a river. If my ten-year plan ended with the same job, I would be out on my ass in five. Even if it's Tokyo pods and dog food, we need something for people with no drive and/or vision. We have the better part of a century, hopefully, before robots can dream up new ideas. At this point society is the one dragging down the curve; if we can set up everyone with vision with the opportunities, education, and resources to realize their dreams, we can have a golden future.

 

Sadly, our future is on the course of the film WALL-E, but without the space travel. So Idiocracy, but fatter.

