  1. You obviously don't understand how bitcoin mining works... Admittedly not terribly much, so I guess it doesn't work quite like I thought. I did at least check the wiki a bit and saw that there's a limit to the number of bitcoins, so I revised my "million" figure down. I know it involves a lot of calculating, and that you want high-end video cards or dedicated ASICs to work on it. I assume from your tone that the market would somehow have to adapt to someone suddenly entering the marketplace with a computer of unprecedented capability, orders of magnitude beyond a good mining cluster? I'll retract the bit about bitcoins then, but still leave standing the problems with current encryption methods. I'm thinking of NSA-style stuff for one: capture data encrypted today, wait until a quantum computer is available, and brute-force whatever they captured and flagged as being of interest. Hopefully these problems are resolved by the time quantum computing arrives as something consumers can get their hands on. I'm not entirely confident of that, though, given that we still have site operators storing passwords either in plain text or with a basic hash that a good video card cluster can crack without breaking a sweat.
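The password-storage point above can be sketched in a few lines. This is a minimal Python illustration, not any particular site's scheme; the password, salt size, and iteration count are made-up round numbers, and PBKDF2 stands in for whatever slow KDF a site might actually use:

```python
import hashlib
import os

# A bare, unsalted fast hash: identical passwords give identical
# digests, and a GPU cluster can test billions of guesses per second.
weak = hashlib.md5(b"hunter2").hexdigest()

# A salted, deliberately slow KDF (PBKDF2 here): the salt defeats
# precomputed tables, and the iteration count throttles brute force.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

# Same password + same salt + same iterations -> same derived key,
# which is how a login check verifies without storing the password.
check = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
assert check == strong
```

The difference is purely economic: the weak hash costs an attacker nanoseconds per guess, the KDF costs a deliberate fraction of a second.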
  2. I remember seeing something, I think it was a TED Talk, that said that certain genes only activate and do their thing during sleep. That talk made it sound like sleep is still poorly understood. It was just a simplified example to illustrate a point. Tasks where you want that brain-in-a-box might end up being intensely boring to that brain. Stash your brain-in-a-box in a robot like that Baxter thing: it'll still do simplistic tasks, but the intelligence adds versatility. It's effectively a way of automating small tasks for low-volume jobs. Normally you want to set up a robot to make thousands upon thousands of things, amortizing the setup time across many units. Robots don't make as much sense for small production runs because of that setup time. An intelligent AI could take instruction easily, or even figure things out on its own. Still boring though. There's a difference between "busy" and "interesting." One summer I worked at a warehouse unloading and palletizing ladders. I was damn busy, but bored senseless. I did sometimes have to kick in at more-than-robot intelligence, though, to figure out how to unhook ladders from the pile where the neat stacks had shifted during transit. More than 95% automatable, but with enough in there where you need a brain to keep unstacked ladders from gumming up the works. I guess with adequate manipulation, maybe you could end up with a mind that readily enjoys or tolerates extreme repetitiveness. It would also need to tolerate an abrupt absence of that stimulus, such as when a machine upstream fails, without throwing a tantrum. Reverse-engineer a brain and modify it, with all the ethical complications, or forward-engineer something by artificial means (with its own eventual ethical complications): either one's a substantial challenge.
  3. Google's got a very rudimentary quantum computer in testing. It doesn't do a whole lot right now, but that's probably like saying transistors are useless for computing because you evaluated a 200-transistor logic gate against a cluster of Core i7 processors. Either way, the brain does its processing and it works. I think we can find a way of mimicking or duplicating that functionality in a similar volume; it's clearly not prohibited by the laws of physics, otherwise it simply wouldn't work. Then you'll end up with ethical problems more quickly than you will with artificial AIs. A factory-default human brain has other firmware problems: don't give it enough sensory input and it will go insane. Tell it to do something a computer can do ("stare at this input 24/7 and let me know if it changes for more than 20 milliseconds") and it'll probably still go insane. It might also need to sleep periodically; it seems the need to sleep was woven into our genome at a very low level. Earth's been rotating for a very long time, and life here adapted to that pretty thoroughly.
  4. An OS is already quite complex. One action affects numerous subroutines very quickly. (Win95 was evidence of this: one program crashes and it brings down the whole system. Ahhh, the bad ol' days.) I'd even say that some bugs are emergent properties: the behavior of individual components gives expected results, but when they interact, you get something different and entirely unexpected. Replicating the machine is not possible. Today. We're still trying to use vacuum tubes to solve a huge computing problem. That's where I think quantum computers will come in. And to run with your example of power requirements, duplicating a modern PC with vacuum tubes would be enormous. There's a video showing a byte of memory made with vacuum tubes; a phone can have more than a billion transistors in it now. Past the seven-minute mark, the video gives some figures for the number of cubic kilometers it would take to build a modern PC (as of 2012) with vacuum tubes. Never mind the power and cooling requirements. The barrier was vacuum tubes. The revolution was transistors. Now we're running into the barrier of transistors. Up next: qubits. Will they be the ticket to truly intelligent AIs in a small package? Depends on how densely we can pack them. Maybe we need room-temperature superconductors too, working in concert with qubits, to make it compact without melting. We'll see. The thing is, the brain's processing power is mostly subconscious. Just look at the act of getting up, going to the fridge to get something to eat, and going back to what you were doing. If you try to translate that into machine language, that task is hugely complicated; and that's ignoring the aspect of doing it of its own decision rather than as a preprogrammed command. Yet we do it with our minds wandering half the time. That may be where an AI would hold an "advantage" (depending on how you want to look at it) over the human brain: it wouldn't have the barrier of a subconscious.
It may have something analogous to a subconscious, though: background or low-priority processes, or even separate subprocessors. I work with PIC chips, and they come with multiple internal timers. Rather than tie up the main processor with simple timing functionality, the timers run independently. Same with the incoming data buffer: it's tiny, only one byte, but a byte consists of multiple bits, which means multiple potential transitions of an incoming voltage. The receiver module handles translating and storing the incoming byte without the processor's involvement. The timer or receiver only interrupts the main processor when it finishes some specified task. If they never interrupted, the processor wouldn't really know they were doing anything.
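That receiver arrangement can be sketched in software. This is a toy Python simulation, not real PIC firmware: the "receiver" assembles incoming bits into a one-byte buffer on its own, and the flag it raises stands in for a hardware interrupt, which is the only point where the "main processor" gets involved:

```python
class ToyReceiver:
    """Toy model of a UART-style receive module: it shifts bits into
    a one-byte buffer independently and only 'interrupts' (sets a
    flag) once a full byte has arrived."""

    def __init__(self):
        self.shift_reg = 0
        self.bit_count = 0
        self.buffer = None          # the one-byte receive buffer
        self.interrupt_flag = False

    def clock_in_bit(self, bit):
        # All of this happens without the main processor's involvement.
        self.shift_reg = (self.shift_reg << 1) | (bit & 1)
        self.bit_count += 1
        if self.bit_count == 8:
            self.buffer = self.shift_reg
            self.shift_reg = 0
            self.bit_count = 0
            self.interrupt_flag = True   # now tell the processor

def main_loop(rx, incoming_bits):
    """The 'main processor': it only reacts when the flag is raised."""
    received = []
    for bit in incoming_bits:
        rx.clock_in_bit(bit)
        if rx.interrupt_flag:            # interrupt service, in effect
            received.append(rx.buffer)
            rx.interrupt_flag = False
    return received

# Clock in 0b01000001 (ASCII 'A'), most significant bit first.
print(main_loop(ToyReceiver(), [0, 1, 0, 0, 0, 0, 0, 1]))  # [65]
```

If the flag never goes up, the main loop does nothing with the receiver at all, which is exactly the point: the subprocessor's work is invisible until it completes.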
  5. "We do not need to worry about robots becoming self-aware and overthrowing humanity. It's just not going to happen." I'm also siding with the rich tech entrepreneurs on this. - Self-awareness: Intelligence involves being able to hold a model of your environment in memory, and relate that model to reality. If the model includes the concept of its own memories as part of the model, that is the start of self-awareness. - "Computers are really good at recognizing patterns, and carrying out instructions based on those patterns. Now we have made enormous leaps in AI over the past few decades, but there are limits." And then there were some things said about enormous barriers preventing progress beyond a certain point. Simple machines, mechanical computers, vacuum tubes, transistors: each was a breakthrough that permitted some manner of revolutionary progress. What's next? Quantum computers are making tiny steps of progress. One big disruption they'll provide first is encryption: things that would take a regular computer millions of years might take minutes, or less. (And the Bitcoin market might experience disruptions. "...So, someone just made a million bitcoins. Yesterday. Interesting.") They'll be an enormous leap in computing power, and should get us the amount of power we need to make something rivaling a biological brain. Concerning recognizing patterns: yes, this is the path to intelligence and self-awareness. We recognize patterns well too. It's part of the reason we program AIs to do the same thing: it works for us, it's how we operate, and since we know how adequately that works for us, let's use it as a baseline. Your brain has the big advantage of having a huge number of neurons that can form a lot of connections. (Kind of what a quantum computer is heading for, with each individual element capable of being in multiple states simultaneously.) The final piece is this: emergent property.
If you have a complex system, you get behaviors you didn't plan on, caused by unexpected interactions of various components. This is where you can get things like intuition or even emotional states. If you're emotional, your thoughts become predisposed to follow a certain pattern. A whole society can be prone to this sort of effect, wherein the "mood" of the whole changes over time. I think you will encounter this sort of thing in a sufficiently complex computer system, to the point that it would not be easily differentiated from how a human mind works. (I just don't think a human brain is all that big or amazing of a deal. It's got a huge number of neurons stuck together in a reasonably functional manner, and that complexity and sheer number make it difficult to simplify and model while still properly understanding it. It lacks any good debug port to plug into, so it remains mysterious as a metaphoric "black box." It's also been under development at the molecular and cellular level for a time period that's orders of magnitude longer than we've been working on our little computer systems.) - Overthrowing humanity? You don't need intelligence to do that. There are numerous ways of killing, terrifying, and manipulating large numbers of human beings using methods that focus on our basic needs and instincts. But let's assume it is intelligence at work, since we're talking about AI. Would it want power or control? That depends on how it's raised. If you raise a human child to be a remorseless killer primarily suited for combat, you shouldn't be surprised if it starts killing the "wrong" people, nor should you be surprised if it reacts poorly when you tell it that it needs to stop doing what you raised it to do.
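The encryption disruption mentioned above comes down to arithmetic. For brute-force key search, Grover's algorithm gives a quadratic speedup, so a keyspace of 2^n costs on the order of 2^(n/2) quantum operations. A back-of-envelope Python comparison, where the guess rate is an invented round number purely for illustration (the truly dramatic "minutes instead of millions of years" collapse is usually attributed to Shor's algorithm attacking public-key schemes like RSA, not Grover):

```python
# Back-of-envelope cost of brute-forcing a 128-bit symmetric key.
classical_ops = 2 ** 128           # worst case: try every key
grover_ops = 2 ** (128 // 2)       # Grover: ~sqrt of the keyspace

rate = 10 ** 12                    # hypothetical 1e12 guesses/second
seconds_per_year = 60 * 60 * 24 * 365

classical_years = classical_ops / rate / seconds_per_year
grover_years = grover_ops / rate / seconds_per_year

print(f"classical: {classical_years:.2e} years")
print(f"grover:    {grover_years:.2e} years")
```

At the same made-up guess rate, the quadratic speedup turns "vastly longer than the age of the universe" into under a year, which is why the usual mitigation is simply doubling symmetric key lengths.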