The Future of Computers and Artificial Intelligence


In the last 50 years, the advent of computers has radically altered our daily routines and habits. From huge, room-sized, terribly expensive and rather impractical machines, computers have managed to become quite the opposite of all the above, seeing exponential growth both in the number of units sold and, just as strikingly, in usability.

If all of this happened in the first 50 years of computing history, what will happen in the next 50?

Moore’s Law is an empirical observation about the evolution of computer microprocessors that is often cited to predict future progress in the field, as it has proven quite accurate in the past. Simply put, it states that the transistor count of a state-of-the-art microprocessor doubles roughly every two years, which essentially means that available computational power grows exponentially.
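To make the doubling concrete, here is a minimal sketch of Moore’s Law as a simple compounding formula. The starting chip, the years, and the two-year doubling period are illustrative assumptions, not historical figures:

```python
# A minimal sketch of Moore's Law as a compounding formula; the starting
# chip and dates below are illustrative assumptions, not historical data.
def projected_transistors(base_count, base_year, target_year, doubling_years=2.0):
    """Project a transistor count forward, assuming a steady doubling period."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Example: a hypothetical 1-million-transistor chip in 1990 would be
# projected to reach about a billion transistors by 2010 (2**10 doublings).
print(projected_transistors(1_000_000, 1990, 2010))  # 1024000000.0
```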

But we already have fast computers that run complex applications with fairly sophisticated graphics at acceptable CPU usage. So, as speeds keep climbing, what could we use all of that extra computing power for?

In the relatively young science of computer algorithms, there is a class of problems known as ‘NP-hard’, sometimes informally described as ‘intractable’ or ‘combinatorially explosive’. For these problems, the cost of computing a solution grows exponentially with the size of the input. A classic example is finding the exit of a labyrinth by brute force: it doesn’t require much effort when there is only one junction, but the search becomes far more demanding as the junctions grow to 10, 100, 1000, etc., to the point where it becomes impossible to compute with limited processing power. Alternatively, it may be computable, but only in an unacceptable amount of processing time.
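To see how quickly this kind of search blows up, here is a minimal sketch in Python. It assumes an idealized maze in which every junction offers two unexplored corridors, so a naive brute-force search may face two-to-the-power-of-junctions routes in the worst case:

```python
# A minimal sketch of the combinatorial explosion described above: if every
# junction in a maze offers two fresh corridors, a naive brute-force search
# may have to examine choices ** junctions distinct routes in the worst case.
def worst_case_routes(junctions, choices_per_junction=2):
    """Worst-case number of routes a brute-force maze search must consider."""
    return choices_per_junction ** junctions

for n in (1, 10, 100):
    print(f"{n} junctions -> {worst_case_routes(n):,} routes")
# 1 junctions -> 2 routes
# 10 junctions -> 1,024 routes
# 100 junctions -> 1,267,650,600,228,229,401,496,703,205,376 routes
```

At 100 junctions the route count already dwarfs what any realistic hardware could enumerate, which is exactly why raw speed alone cannot solve these problems, though it does push the practical limit further out.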

Many (if not all) artificial intelligence-related algorithms are extremely demanding in terms of computational resources: they are either NP-hard or involve combinatorial calculations of rapidly growing complexity. Moreover, in the AI domain an ‘acceptable time’ to return an answer is much shorter than in many other cases: you want the machine to respond to stimuli as quickly as possible so that it can interact effectively with the world around it. Therefore, while it would not be a definitive solution, the constant progress in computational power could boost progress in the field of AI in a very significant way.

Will we ever be able to achieve a general-purpose artificial intelligence? It is probably too early to answer, but if we examine the results of today’s technology, the outlook is certainly positive. Different companies are working on different aspects of this technological dream. Honda is probably the most advanced in terms of hardware mobility and coordination with its ASIMO robot series, while on the software side two notable examples are Cycorp, with its impressive knowledge-based language recognition engine, and Novamente, which focuses on general intelligence.

How long until we see concrete results? Cycorp spokesmen say they are confident they will be able to build a ‘usable’ general-purpose intelligence on top of their language recognition engine by 2020, while others expect it to happen before 2050. It would be hard, or rather impossible, to determine who (if anyone) is right, but what seems certain is that the AI industry remains far too fragmented. We are still missing a centralized coordinator with the necessary resources (think Google) that could integrate today’s varied and highly diversified technologies into a single entity. Right now, this seems the only plausible way to meaningfully accelerate progress in this industry.
