About AI

When you think of artificial intelligence, or AI, what do you imagine?

Maybe you think of a computer that can think, talk, and move like a living human being. That kind of thing is off in the far future, right?

Well, it might be closer than you think. Computers can already do all of these things, sometimes less efficiently than a human can, but they are making swift progress.

One great example of this is GPT-3, the third-generation Generative Pre-trained Transformer.

GPT-3 is a program that specializes in understanding and generating text. It was fed nearly half a trillion tokens of text in many different languages, ranging from books to Wikipedia articles, all to get a grasp on the way our language works.

Because of this, GPT-3 is able to answer questions, summarize texts, and handle just about any other task related to language.
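
To make this concrete, here is a minimal sketch of GPT-style text generation. GPT-3 itself is only reachable through OpenAI's API, so this sketch uses GPT-2, an earlier, openly downloadable model in the same family, through the Hugging Face transformers library:

```python
# A minimal sketch of GPT-style text generation, using the openly
# available GPT-2 as a stand-in for the much larger GPT-3.
from transformers import pipeline

# Load a pre-trained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence has advanced rapidly because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with the words it finds most likely.
print(result[0]["generated_text"])
```

This prompt-in, text-out pattern is the same one GPT-3 uses to answer questions and write summaries; the difference is the far larger model behind it.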

You might be wondering, though: just how can we put this to use?

One of the biggest hurdles between computers and people is communication. Even today, with all of the advancements made in computing technology, it is extremely difficult to get a computer to do what you want it to. You must still spend months learning how to program, and even longer actually planning, creating, and testing your programs.

But what if the computer could understand what the user wanted and perform that action on its own? This would let even those without much knowledge of computers automate tedious work, and allow anyone to create a website from scratch.
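
As a hypothetical sketch of that idea, here is how you might prompt a small, openly available code-generation model to turn a plain-English comment into a function. The model named here (Salesforce/codegen-350M-mono) is chosen only for illustration and is far less capable than a GPT-3-class system:

```python
# Hypothetical sketch: asking a language model to write code from a
# plain-English description. The model choice is an assumption made
# for illustration, not a recommendation.
from transformers import pipeline

codegen = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

# Describe the desired behavior in a comment and let the model complete it.
description = "# a Python function that returns the average of a list of numbers\ndef "
suggestion = codegen(description, max_new_tokens=60)[0]["generated_text"]

print(suggestion)
```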

In fact, many developers have exactly this in mind and are either building such tools now or waiting for the technology to mature.

That improvement may be near: the team behind GPT-3 is currently working on GPT-4, a newer model rumored to be as much as 500 times the size of its predecessor.

Text isn’t the only area computer scientists have developed AI for. Enter Atlas, Boston Dynamics’ humanoid robot, capable of walking, running, jumping, and more complex movements like vaulting over a balance beam and performing backflips.

This might not sound all that impressive at first glance. After all, humans do these kinds of things on a daily basis, so how hard could it be to teach a robot? Very hard, it turns out. Human beings learn to do things like walk only after months of practice and development. We instinctively correct our balance and foot placement when walking on different surfaces, something robots struggle with.

With Atlas, every skill that humans learn subconsciously needs to be meticulously designed and then adjusted through trial and error.

One of the strategies the designers might have used is giving Atlas a few templates for important movements like walking and jumping, which the control software modifies to keep its balance and match the terrain; a toy sketch of the idea follows below.
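
As a rough illustration (this is a toy sketch, not Boston Dynamics' actual control code), imagine a stored joint-angle template for one step that a controller nudges on every tick based on the body tilt it measures:

```python
import numpy as np

# Toy sketch of "template plus correction" control. The trajectory,
# gain, and sensor reading below are all assumed values for illustration.
def step_template(t):
    """Nominal hip angle (radians) over one stride of the template."""
    return 0.3 * np.sin(2 * np.pi * t)

def balance_correction(tilt, gain=0.8):
    """Lean the template against the measured body tilt to stay upright."""
    return -gain * tilt

for t in np.linspace(0.0, 1.0, 5):
    measured_tilt = 0.05  # pretend sensor reading (radians)
    command = step_template(t) + balance_correction(measured_tilt)
    print(f"t={t:.2f}  hip angle command: {command:+.3f} rad")
```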

Speaking of terrain, how does Atlas even find out what its surroundings look like? Atlas takes visual input through both RGB cameras (visible-light imaging sensors) and depth sensors to create something called a ‘point cloud’: an estimate of the shapes and distances of the objects around the robot.
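
Turning a depth image into a point cloud is a standard bit of camera geometry. Here is a minimal sketch using the pinhole camera model; the intrinsics (fx, fy, cx, cy) are assumed placeholder values, not Atlas's real sensor parameters:

```python
import numpy as np

# Back-project each pixel of a depth image into a 3D point using the
# pinhole camera model. Intrinsics here are placeholders, not Atlas's.
def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                # distance along the camera axis (meters)
    x = (u - cx) * z / fx    # horizontal offset recovered from pixel column
    y = (v - cy) * z / fy    # vertical offset recovered from pixel row
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 4x4 depth image: everything exactly 2 meters away.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0))
print(cloud.shape)  # (16, 3): one 3D point per pixel
```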

The uses for this robot are a bit clearer: you could use it to rescue people from dangerous places and to deliver things like packages and medicine. This development could cut down on the amount of physical labor required in our workplaces and reduce the number of people put at risk in jobs like these.

From a computer program capable of holding a conversation to a robot that can walk on two legs, artificial intelligence has progressed far more than you might think.