Renesas Reality AI Tools®: Technical Q&A

Renesas Reality AI Tools® enables engineers to generate and build TinyML/Edge AI models based on advanced signal processing. Users can automatically explore sensor data and generate optimized models. Reality AI Tools includes a set of analysis functions that help identify the best sensor or sensor combination and the best sensor placement, and that automatically generate component specifications. It also provides fully interpretable time-domain/frequency-domain models, along with code optimized for execution on Arm® Cortex®-M/A/R cores.

The following is a Q&A on Renesas Reality AI Tools®:

Q1: Does Reality AI Tools® allow users to customize AI models and deploy them quickly?

A: e-AI models are customized and trained by the customer, but Reality AI (RAI) models themselves cannot be customized.

Q2: Where is the model training of Reality AI carried out?

A: Reality AI model training is generally carried out in the cloud. When using Edge AI or DRP, developers can choose to train either on a local PC or in the cloud. The cloud platform provides the corresponding algorithms for MCU AI training, while MPU AI training can be carried out on a PC, in the cloud, or on a server as needed.

Q3: If the AI model needs to be fine-tuned or updated after being deployed to the end device, what tools and processes does Reality AI Tools® provide?

A: e² studio can connect directly to the RAI platform, and the updated model can be downloaded directly into e² studio.

Q4: Can Reality AI Tools® directly perform model training and fine-tuning on Renesas MCUs/MPUs?

A: RAI cannot fine-tune models on the MCU itself. However, other tools can meet this need; contact Renesas for help if necessary.

Q5: How does the Reality AI real-time analysis platform facilitate efficient AI development and debugging on Renesas MCUs/MPUs?

A: Running an RAI model on an MCU requires calling only one function, which keeps integration simple and efficient (see the sketch below). In addition, the model's decision basis can be visualized after feature extraction, and the platform reports performance figures such as RAM usage, flash usage, and computational load.
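As a rough illustration of that single-call pattern, a minimal C sketch follows. Note that rai_classify() and WINDOW_SAMPLES are hypothetical names used for illustration only, not the actual identifiers generated by Reality AI Tools®:

```c
/* Minimal sketch: invoking an exported Reality AI model from MCU code.
 * rai_classify() and WINDOW_SAMPLES are hypothetical placeholders,
 * not the actual names produced by the Reality AI export. */
#include <stdint.h>

#define WINDOW_SAMPLES 512          /* assumed input window length */

/* Prototype as it might appear in the generated header (hypothetical). */
extern int rai_classify(const int16_t *samples, uint32_t n_samples);

static int16_t sensor_window[WINDOW_SAMPLES];

int run_inference(void)
{
    /* A single call covers feature extraction and classification
     * for the buffered sensor window. */
    return rai_classify(sensor_window, WINDOW_SAMPLES);
}
```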

Q6: How can I conduct visual debugging and performance analysis of AI models in Reality AI Tools®?

A: The model's decision basis can be visualized after feature extraction. For performance analysis, the platform provides figures for RAM usage, flash usage, and computational load.

Q7: Which mainstream AI frameworks does Reality AI Tools® support for model conversion?

A: It supports PyTorch, Keras, TensorFlow, and TensorFlow Lite.

Q8: Does Reality AI Tools® support automatic code generation to simplify MCU/MPU programming?

A: After the model is exported, running it requires calling only one function, so the programming is relatively simple. However, since many sensors are external components, the tool cannot generate sensor-related code; that part remains the user's responsibility, as the sketch below shows.
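For illustration, here is a minimal sketch of that division of labor, assuming the same hypothetical rai_classify() as above; read_accel_sample() stands in for the user-supplied sensor driver that the tool cannot generate:

```c
/* Minimal sketch of the user-written sensor glue that Reality AI Tools®
 * does not generate. read_accel_sample() and rai_classify() are
 * hypothetical placeholders; the real driver depends on the sensor. */
#include <stdint.h>

#define WINDOW_SAMPLES 512

extern int rai_classify(const int16_t *samples, uint32_t n_samples);
extern int16_t read_accel_sample(void);   /* user-supplied sensor driver */

void sample_and_classify(void)
{
    static int16_t window[WINDOW_SAMPLES];

    /* The application collects samples from the external sensor itself;
     * only the inference call below comes from the exported model. */
    for (uint32_t i = 0; i < WINDOW_SAMPLES; i++) {
        window[i] = read_accel_sample();
    }

    int class_id = rai_classify(window, WINDOW_SAMPLES);
    (void)class_id;                        /* act on the result here */
}
```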

Q9: Which development environments does Reality AI Tools® support?

A: Currently, it mainly supports CS+ and e² studio. Support for toolchains such as IAR and Keil is being rolled out gradually.

Q10: Can Reality AI Tools® cover the entire product development life cycle?

A: Yes, it can.

Q11: What specific development functions does Reality AI Tools® provide to accelerate the AI application development cycle?

A: It covers data review, model training, model deployment, and model optimization (channel selection, model size optimization, and sampling-rate optimization).

Q12: Does Reality AI Tools® charge for use?

A: Currently, the tool is paid; pricing depends on the specific use case.

Q13: Is Renesas’ edge-side AI trained in the cloud and then deployed locally?

A: Yes. Models for MCUs can be trained in the cloud, while models for MPUs can be trained in the cloud, on a PC, or on a server as needed. The trained model is then deployed to the edge device.

Q14: Is the solution of Reality AI Tools® applicable to small MCUs?

A: Yes. It can even run on the 16-bit RL78 MCU, and the trained model occupies very little of the device's resources.