In collaboration with Microsoft, NVIDIA has announced the availability of NVIDIA-optimized Phi-3 models. Developers can now experiment with Phi-3 Mini, which offers a 128K-token context window, at ai.nvidia.com.

Packaged as an NVIDIA NIM (NVIDIA Inference Microservice), Phi-3 Mini exposes a standard application programming interface, so it can be deployed consistently across a wide range of platforms.
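As a sketch of what calling such a standard API looks like, the snippet below builds an OpenAI-style chat-completion request and sends it to a hosted endpoint. The endpoint URL, model identifier, and `NVIDIA_API_KEY` environment variable are assumptions for illustration; check ai.nvidia.com for the values published for Phi-3 Mini.

```python
import json
import os
import urllib.request

def build_request(prompt, model="microsoft/phi-3-mini-128k-instruct"):
    """Build an OpenAI-style chat-completion payload.

    The model identifier is an assumption; use the one listed
    on ai.nvidia.com for the Phi-3 Mini NIM.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def chat(prompt, api_key,
         url="https://integrate.api.nvidia.com/v1/chat/completions"):
    """POST the payload to the hosted endpoint and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Only call the remote service if a key is configured.
    key = os.environ.get("NVIDIA_API_KEY")
    if key:
        print(chat("Summarize NIM in one sentence.", key))
```

Because the request and response shapes follow the familiar chat-completions convention, the same client code can target a locally deployed NIM container by swapping the URL.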

Context window up to 128K tokens

This marks a milestone: Phi-3 is the first model in its class to support a context window of up to 128K tokens. Phi-3 Mini has 3.8 billion parameters and was trained for seven days on 512 NVIDIA H100 Tensor Core GPUs.

The announcement follows NVIDIA's recent support for Google Gemma and Meta Llama 3, and continues its strategy of pairing accelerated computing with open language models so that developers can build and deploy state-of-the-art AI applications.
