How to Power Our Mobile AI Agent with NVIDIA Tech?

Hello everyone! We just got into the NVIDIA Inception Program, and have a big question.

We’re building an AI agent for a mobile app that handles STT and TTS locally and leverages fine-tuned LLMs via APIs. We’re exploring various tech options but want to prioritize NVIDIA technologies. What’s the best way to utilize NVIDIA’s tools and platforms for this?

We’d love to hear actionable, technical guidance to help us kickstart development with NVIDIA’s stack!

Thank you very much for your support in advance!

Hi there @umut2, welcome to the NVIDIA developer forums and congratulations on making it to the Inception program!

The forums are, for the most part, a place for discussion between developers and NVIDIAns about specific issues people run into, or for general guidance on topics. For your needs, the AI category will most likely hold the most useful information.

Detailed technical guidance comes through our DLI courses or as part of the examples in our NIM offerings. For the latter, you should start with Introduction — NVIDIA NIM for Large Language Models (LLMs), even though you might eventually want local inference.
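To give you a rough idea of what the NIM path looks like in practice: a NIM for LLMs microservice exposes an OpenAI-compatible HTTP API, so your mobile app's backend can talk to it with a standard OpenAI client. Here is a minimal sketch; the base URL, API key, and model name are placeholders and depend on which NIM you actually deploy (or which hosted endpoint you prototype against):

```python
# Minimal sketch: querying a NIM for LLMs endpoint through its OpenAI-compatible API.
# The base URL, API key, and model name below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of a self-hosted NIM container
    api_key="not-needed-for-local",       # placeholder; hosted endpoints require a real key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model id; use whatever NIM you deploy
    messages=[
        {"role": "system", "content": "You are the reasoning backend for a mobile voice agent."},
        {"role": "user", "content": "Summarize today's calendar in two sentences."},
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
```

Because the schema is OpenAI-compatible, the same client-side integration should carry over whether you start against a hosted endpoint while prototyping and later switch to a self-hosted or fine-tuned NIM.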

But most of all, make use of the Inception portal you should have received access to. If I am not mistaken, you can also get guidance on getting started through that.

Thanks!


Thank you very much, Markus, for your response and suggestions - very helpful. We will check out the courses right away for sure!

We are told to visit the NVIDIA Developer Forum for our technical questions and include the “inception” tag in our queries. We do have access to courses, so we will definitely be utilizing them.

BTW, I thought I selected the AI category, but somehow I picked the wrong one, I guess - my bad. Thanks again!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.