Can you believe it? A drone that accurately identifies targets suddenly turns “dumb” when it flies into a deep mountain valley: without internet, the “super brain” in the cloud can’t send commands, and the drone can’t even avoid a small branch. Yet a seemingly identical drone, dropped into the same signal dead zone, deftly navigates around obstacles and lands steadily.
The difference lies in this: the latter carries its “brain” onboard.
In the second episode of the “Redefining the Present with AI” series, DFRobot’s instructor Rockets Xia walks through the differences between “centralized intelligence” and “distributed intelligence,” along with a series of interesting projects.
Featured Speaker:
Rockets Xia (夏青), Senior Engineer at DFRobot and Co-founder of Maker Space Mushroom Cloud
Rockets Xia is an active figure in global maker communities. Since 2008, he has promoted maker culture and the growth of China’s maker movement. In 2010, he co-created China’s first maker space, XinCheJian, with “Godfather of Chinese Makers” David Li. In 2013, with support from DFRobot and Pujiang Group, he established Maker Space Mushroom Cloud. As a co-founder of Mushroom Cloud, he frequently encourages and advances community maker projects. As a senior engineer at DFRobot, he actively promotes the adoption of AI, IoT, and other advanced technologies in maker education.
We often talk about AI these days, but most of the time we think of cloud data centers—like search engines and large language models, which rely entirely on distant server clusters for their “centralized intelligence.” But once disconnected from the network, this intelligence becomes “armchair strategy.” Today, we’ll discuss “distributed intelligence” (also called edge intelligence), which is pulling AI from the cloud right to our side.

(Image credit: DFRobot)
When Intelligence Moves from the “Cloud” to “Beside Us”
Centralized intelligence and distributed intelligence are like two “assistants” with completely different personalities:
- Centralized intelligence is like a “remote expert”: it knows everything but relies on the network and responds with a delay (e.g., internet-dependent voice assistants);
- Distributed intelligence acts like a “personal bodyguard”: it keeps working without internet and responds in milliseconds (like a smartwatch that recognizes gestures offline).
Why is distributed intelligence so “capable”? The key lies in shrinking AI models—this is the magic of TinyML technology. It compresses AI models that originally required servers to run, making them operable on microcontrollers with power consumption so low that a coin cell battery can last for months.
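The “shrinking” that TinyML performs is largely post-training quantization: storing 32-bit float weights as 8-bit integers cuts model size roughly 4x at a small accuracy cost. A minimal sketch of the core idea in plain Python (illustrative only, not a real TinyML toolchain; the weight values are made up):

```python
# Illustrative sketch of symmetric int8 quantization, the core trick
# behind TinyML model shrinking (not a real deployment toolchain).

def quantize(weights, num_bits=8):
    """Map float weights onto the signed int8 range [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.731, -0.205, 0.044, -0.918, 0.366]
q, scale = quantize(weights)
approx = dequantize(q, scale)

# Each weight now fits in 1 byte instead of 4 -> ~4x smaller model,
# and the round-trip error is bounded by scale / 2.
max_err = max(abs(a - w) for a, w in zip(approx, weights))
print(q)
print(max_err)
```

Real frameworks add calibration, per-channel scales, and integer-only inference kernels on top of this, but the size/accuracy trade is the same.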
Equipping Edge Devices with a “Smart Brain”
To make devices locally intelligent, two essential components are needed: a “brain” (main control chip) and “senses” (sensors).
First, the “brain.” Edge device chips can’t guzzle power like server chips—they must balance computing power, energy consumption, and size. Take DFRobot’s UNIHIKER K10 learning board, for example, which packs a “small but mighty” core—the Espressif ESP32-S3 chip. This chip is an “all-rounder,” capable of both data processing and running AI models, with ultra-low power consumption. Even better, the UNIHIKER K10 integrates a screen, sensors, and expansion ports right out of the box, making TinyML training accessible even for beginners.
Its “sibling,” the UNIHIKER M10, focuses more on Python programming. While their configurations differ, both embody the same philosophy: bringing AI from the “lab” to the “palm of your hand.”
Now, the “senses.” Without sensors, even the smartest AI is “blind and deaf.” DFRobot’s sensor lineup includes some real “experts”:
- Gravity Offline Voice Recognition Module: No internet needed; just say “turn on the light” to trigger the switch. It “memorizes” your command features in advance and responds instantly to matching sounds, perfect for smart homes and industrial control;
- SEN0305 HuskyLens Vision Module: This is no ordinary camera. Press a button to let it “take a look” at your face, and it remembers you, recognizing you instantly next time, all processed locally for maximum privacy;
- Gravity Voice Synthesis Module: Converts text into natural speech. Paired with a voice recognition module, it enables devices to “converse” with you, such as audibly announcing recognized object names.
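A nice property of these offline modules is that the host controller never touches raw audio: the module does recognition locally and reports a numeric command ID over I2C/UART, and the host just maps IDs to actions. A toy sketch of that host-side pattern (the IDs, the `ACTIONS` table, and `set_pin` are invented for illustration, not the real module protocol):

```python
# Sketch of host-side logic for an offline voice module: the module
# recognizes speech locally and reports a command ID; the controller
# only dispatches on that ID. IDs and helpers here are hypothetical.

PINS = {}

def set_pin(name, state):
    PINS[name] = state   # stand-in for digitalWrite on real hardware

ACTIONS = {
    5: ("turn on the light",  lambda: set_pin("LIGHT", True)),
    6: ("turn off the light", lambda: set_pin("LIGHT", False)),
    7: ("turn on the fan",    lambda: set_pin("FAN", True)),
}

def handle(command_id):
    """Dispatch a recognized command ID; ignore unknown IDs."""
    if command_id in ACTIONS:
        phrase, action = ACTIONS[command_id]
        action()
        return f"ok: {phrase}"
    return "ignored"

print(handle(5))  # ok: turn on the light
print(PINS)       # {'LIGHT': True}
```

Because only a small integer crosses the wire, the same dispatch loop runs unchanged on an Arduino UNO, an ESP32, or a Raspberry Pi.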
3 Mini Projects to Showcase the Practicality of Local AI
Actions speak louder than words. These small projects built with DFRobot hardware will instantly demonstrate the charm of local AI:
- Offline Voice-Controlled Device: Using an Arduino UNO with an offline voice module, commands like “turn on the fan” or “turn off the light” execute instantly without internet, with near-zero latency.
- Stranger Detection Alarm: Connect a camera and buzzer to Arduino, let HuskyLens “memorize” family faces, and trigger red lights and alarms when strangers appear, perfect for home security.
- Gesture-Controlled “Magic Wand”: Mount an accelerometer on a UNIHIKER K10 board; draw a triangle to turn LED strips green, or wave horizontally to stop the effect, all powered by local AI’s real-time motion analysis.
Behind these projects lies TinyML’s “magic”: Tools like Edge Impulse let anyone train models and deploy them to microcontrollers, no complex coding required.
AI is evolving from cloud-dependent “centralized intelligence” to ubiquitous “distributed intelligence,” becoming more accessible, secure, and adaptable. DFRobot’s hardware bridges the gap for beginners—whether students, makers, or developers, everyone can touch the pulse of intelligence firsthand.
This is Episode 2 of our “Redefining the Present with AI” series. Next time, we’ll explore the wonders of “multimodal interaction” with voice and image. Stay tuned for more!
Where do you think local AI fits best? Join the discussion below!
Related product information:
DFR0992-EN UNIHIKER-K10
The UNIHIKER-K10 is a learning board specifically developed for programming education, IoT, and AI project teaching in information technology curricula. It integrates a camera, LCD color screen, microphone, speaker, WiFi/Bluetooth modules, RGB indicators, multiple sensors, and expansion interfaces – enabling sensor control, IoT applications, image detection, speech recognition, and speech synthesis projects without additional equipment.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: 1738-DFR0992-EN-ND
DFR0706-EN UNIHIKER-M10
The UNIHIKER-M10 is a highly integrated, domestically produced educational open-source hardware platform (with independent intellectual property) designed for K12 teachers and students, meeting new curriculum standards for interdisciplinary teaching in information technology, physics, biology, and other subjects. It integrates a single-board computer (4-core CPU, 512MB RAM, 16GB storage) running Linux with a complete Python environment and common Python libraries pre-installed, plus a 2.8-inch color touchscreen and a rich set of sensors. The Python teaching platform starts in just two steps.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: 1738-DFR0706-EN-ND
DFR0975-U High-performance main control board based on the ESP32-S3, suitable for AIoT, image acquisition, and image recognition projects
FireBeetle 2 ESP32-S3-U is a main control board designed based on the ESP32-S3-WROOM-1U-N16R8 module. The ESP32-S3-WROOM-1U-N16R8 module features 16MB Flash and 8MB PSRAM, allowing for more code and data storage. The ESP32-S3 chip on the module has powerful neural network computing and signal processing capabilities, making it suitable for projects like image recognition and speech recognition.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: 1738-DFR0975-U-ND
DFR0100 Maker education starter learning kit for the Arduino UNO R3 development board and electronics beginners
The Arduino starter kit is designed for beginners in electronic circuit building and programming logic. It covers course content ranging from basic LED control to complex environmental sensing, monitoring, and actuator applications.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: DFR0100-ND
SEN0539-EN Gravity: Offline speech recognition module (I2C & UART)
This module adopts a brand-new offline speech recognition chip. It ships with 135 commonly used fixed command entries and adds a command self-learning function: a self-learned command can be not only a spoken phrase but also a whistle, a snap, or a cat’s meow, with up to 17 self-learned commands supported. A dual-microphone design provides better noise resistance and a longer recognition distance. The module has a built-in speaker and an external speaker interface for real-time voice feedback of recognition results. It supports both I2C and UART communication, features a Gravity interface, and is compatible with controllers such as Arduino Uno, Arduino Leonardo, Arduino MEGA, the FireBeetle series, Raspberry Pi, ESP32, and more.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: 1738-SEN0539-EN-ND
SEN0305 Gravity: HuskyLens AI Vision Sensor
HuskyLens is an easy-to-use AI vision sensor with six built-in functions: face recognition, object tracking, object recognition, line tracking, color recognition, and tag recognition. AI training can be completed with just one button, eliminating tedious training and complex visual algorithms, allowing you to focus more on project conception and implementation.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: 1738-SEN0305-ND
DFR0760 Gravity: Chinese-English Text-to-Speech Module V2.0
Let sound add a unique touch to your project! Connect the text-to-speech module and add a few simple lines of code to make your project speak, in Chinese or English. It can announce the current time, report environmental data, and even hold a voice dialogue when combined with a speech recognition module. The module supports both I2C and UART communication, features a Gravity interface, is compatible with most controllers, and includes a built-in speaker, so no external speaker is needed.
DFRobot official website development resources link
DigiKey online purchase link
DigiKey Part Number: 1738-DFR0760-ND
Editor’s Note
As introduced in the video and article, for the price of a single teaching board, DFRobot’s UNIHIKER packs computing power, sensing, and a software stack into one solution. At a cost of just a few hundred RMB, it addresses three major pain points in edge AI development: difficult environment setup, cumbersome model deployment, and hardware fragmentation. This makes it an efficient choice for rapid prototyping and small-batch deployment of distributed intelligent systems, which is the core value of UNIHIKER. Which development boards do you use for edge AI systems? What experiences or questions do you have during development? Feel free to leave a comment and share with the DigiKey community!