ETRI develops AI system 'ArtBrain-K'
The Electronics and Telecommunications Research Institute (ETRI, President Kim Young-jun) announced on the 2nd that it has developed an artificial intelligence system capable of about five quadrillion (5,000 trillion) operations per second using its low-power, high-performance AI semiconductor chip. ETRI's AI Processor Research Laboratory emphasized that the system offers world-leading performance relative to its electricity consumption.
The need for large-scale processing of complex and sophisticated operations has grown as AI spreads rapidly through society. However, the central processing units (CPUs) of conventional computers and the application processors (APs) of mobile devices are optimized for simple, sequential computation and face clear limits. Graphics processing units (GPUs) have been widely used instead, but because their structure is not optimized for AI data computation, they suffer from processing delays and high power consumption. This is why AI semiconductors based on neural processing units (NPUs), processors that imitate the brain's neural networks to handle large-scale operations in parallel and are optimized for AI algorithm computation, are emerging as the next-generation AI brain.
Following its successful development of the AI semiconductor chip 'AB9 (Aldebaran 9)' last year, ETRI has now, a year later, unveiled AB9-based boards and an artificial intelligence system. The goal is full-scale use in autonomous cars, the cloud, data centers, and high-performance servers that provide AI application services such as voice recognition to the public.
The NPU board developed by the researchers is based on AB9 and achieves low power consumption and a small form factor through a design of their own. The board also carries memory that can store up to 16 GB of input data for AI algorithms, along with a high-speed data-transfer interface.
According to the researchers, the key to building a high-performance, high-efficiency server is mounting as many NPU boards as possible. GPU boards, which currently do most of the work as accelerators for AI algorithm processing, are so bulky and power-hungry that only six or seven can be installed in a single server node. This stems from the structural limits of the GPU: because it was released as a processor for purposes other than AI computation, it carries functions unnecessary for AI processing and consumes a great deal of power. In other words, the GPU is not a processor optimized for accelerating AI algorithms.
By contrast, the NPU board with the built-in 'AB9' can be mounted up to 20 per server node, improving space and power efficiency over existing systems while lowering cost. This is possible because AB9, despite a small coin-sized footprint of 17 mm x 23 mm, delivers 40 trillion operations per second while consuming only about 15 W.
Based on this, ETRI assembled eight server nodes into a server rack to create the artificial intelligence system 'ArtBrain-K'. The system delivers up to 5 petaFLOPS of performance, meaning it can carry out roughly 5,000 trillion operations per second. Compared with a conventional GPU-based AI server, its power efficiency is about four times higher and its computational performance about seven times higher.
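To make the throughput and power figures above concrete, here is a minimal back-of-envelope sketch in Python. The AB9 numbers and node counts are taken from the article; the per-GPU power used for comparison is an assumption chosen only for illustration, not a figure from the article.

# Back-of-envelope check of the figures quoted in the article.
AB9_TOPS = 40            # trillion operations per second per AB9 chip (article)
AB9_WATTS = 15           # power per AB9 chip in watts (article)
BOARDS_PER_NODE = 20     # AB9 boards mounted in one server node (article)
NODES_PER_RACK = 8       # server nodes in the ArtBrain-K rack (article)

node_tops = AB9_TOPS * BOARDS_PER_NODE                          # 800 TOPS per node
rack_tops = node_tops * NODES_PER_RACK                          # 6,400 TOPS raw peak
rack_chip_watts = AB9_WATTS * BOARDS_PER_NODE * NODES_PER_RACK  # 2,400 W (chips only)

print(f"Per node:  {node_tops} TOPS")
print(f"Per rack:  {rack_tops / 1000:.1f} peta-ops/s raw peak (article quotes ~5 peta)")
print(f"Chip-level efficiency: {rack_tops / rack_chip_watts:.2f} TOPS per watt")

# Assumed comparison point: 7 GPU accelerators at roughly 300 W each in one node
# (a typical data-center GPU power budget, not a number from the article).
gpu_cards, gpu_card_watts = 7, 300
print(f"Node power: {AB9_WATTS * BOARDS_PER_NODE} W for 20 AB9 boards vs "
      f"{gpu_cards * gpu_card_watts} W assumed for {gpu_cards} GPUs")

The raw chip-level sum is an upper bound; the 5 petaFLOPS quoted for the full system is the figure stated in the article.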
If ArtBrain-K is applied to a data center, processing capacity and speed improve significantly. It is therefore expected to be used for hyperscale artificial neural networks, which require enormous computing resources for data processing and training, such as Transformer-family AI. (The Transformer is a technology that lets neural networks function collectively as a single integrated network rather than being run individually, and implementing such AI algorithms requires huge amounts of computation; 'GPT-3', a typical example, needs about 3,640 petaflop/s-days of computing to learn roughly 175 billion neural-network parameters.)
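For a sense of scale, the short calculation below divides the 3,640 petaflop/s-day GPT-3 figure by ArtBrain-K's 5 petaflop/s peak. It is an idealized lower bound that assumes 100% utilization with no communication or memory overhead (an assumption for illustration, not a measured result).

# Rough scale check for the GPT-3 training figure quoted above.
GPT3_PFLOPS_DAYS = 3640    # compute budget commonly cited for training GPT-3
ARTBRAIN_K_PFLOPS = 5      # peak performance quoted for ArtBrain-K

days = GPT3_PFLOPS_DAYS / ARTBRAIN_K_PFLOPS
print(f"Idealized training time: {days:.0f} days (about {days / 365:.1f} years)")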
In addition, ETRI has released 'Apparent', a software development environment tool that makes it easy to develop AI algorithms, on GitHub. Related videos can also be found on YouTube by searching for 'ETRI ABK NPU'. The tool provides, in library form, the basic structures, algorithms, simulation, and optimization tools needed for programming, and is configured so that even people unfamiliar with the programming language can try it easily.
The head of ETRI's Artificial Intelligence Processor Research Laboratory described AI semiconductors as a non-memory semiconductor field that the institute has developed with its own technology, and said that by developing an NPU board, a server system, and related software equipped with the AI semiconductor it created, ETRI has succeeded in raising the technology's competitiveness.
The technology is currently being transferred to semiconductor and hardware companies, and it is already used for facial-recognition security and immigration screening in the automated immigration system at the airport. Going forward, the researchers plan to build servers and systems with higher performance and greater sophistication on top of this technology, and to support the localization of components in AI fields that apply deep learning. The study was conducted as part of an information and communication technology program, the Artificial Intelligence Processor Specialized Research Laboratory project. Through low-power, high-performance technologies, including techniques for reducing memory latency, the researchers added that they have filed and registered 32 patents, published five related papers, and completed four technology transfers.