Inside the iG3 Edge Network Architecture: A High-Level Overview
The iG3 Edge Network is a cutting-edge platform that brings cloud capabilities to edge devices. In simple terms, it allows builders to deploy and run applications (such as AI agents and other edge services) on distributed edge nodes.
The purpose of iG3 is to leverage edge computing for faster processing, improved privacy, and better reliability by running computations closer to where data is generated or needed. This high-level overview will walk you through the core architecture of iG3 in a conversational manner, explaining how all the pieces fit together.
Architecture Overview
At a glance, the iG3 Edge Network consists of a lightweight Kubernetes cluster (K3S) managed with KubeSphere, a CI/CD pipeline for getting code deployed (using Git, Docker, and Helm), a RabbitMQ message broker for handling requests, and the external clients and applications that interact with the system.
All these components are connected via a secure mesh network and carefully orchestrated to work together.

In the diagram above, you can see how code flows from the developer’s repository into the cluster, and how client requests flow through RabbitMQ to the edge cluster and back. Now, let’s break down each of these core components and then walk through the end-to-end workflow.
Core Components of the iG3 Edge Network
K3S Cluster with KubeSphere (Edge Orchestration)
At the heart of iG3 is a K3S cluster, which is a lightweight Kubernetes distribution ideal for edge environments. K3S provides the Kubernetes essentials but with a smaller footprint, making it perfect for resource-constrained or remote devices.
The cluster can consist of one or multiple nodes (and even multiple clusters in different locations), all connected through a mesh network so they can communicate over secure local IPs as if on the same LAN.
Within each cluster, there is a master node and one or more worker nodes. The master runs the Kubernetes control plane (and in this case, hosts KubeSphere as well) and is mainly responsible for cluster management and scheduling. The worker nodes are where the actual application services run.
KubeSphere is layered on top of Kubernetes as a friendly management platform: it provides a web UI and additional tools that make it easier to monitor the cluster, manage deployments, and even handle multi-cluster setups.
In short, K3S + KubeSphere gives iG3 a robust yet edge-optimized “mini-cloud” infrastructure where your services will live.
Git, Docker, and Helm — The CI/CD Pipeline
To get code running on those edge clusters, iG3 employs a continuous integration and deployment pipeline using familiar tools:
- Git (Code Repository): Developers start by writing code and pushing it to a Git repository. This is the source of truth for application code.
- Docker (Containerization): Once code is in Git, an automated build process packages the application into a Docker image. Containerizing the app ensures it can run reliably on any node in the cluster with all its dependencies.
- Container Registry: The Docker image is then pushed to a container registry (which could be a public registry or a private one). This registry stores versioned images and makes them available for deployment.
- Helm (Deployment Charts): iG3 uses Helm charts to define how the application should be deployed on the K3S cluster (for example, how many replicas, what environment variables, etc.). The Helm chart references the Docker image from the registry. The charts are stored in a Helm repository so they can be versioned and pulled by the cluster.
- KubeSphere DevOps: With KubeSphere in the mix, the platform likely has CI/CD pipelines set up to automatically build the Docker image and deploy the Helm chart whenever new code is pushed. This means from a developer’s perspective, you can git push your update and the pipeline will handle building the image and updating the application on the edge cluster.
This pipeline ensures that deploying a service to the edge is as streamlined as deploying to any cloud: you write code, build a container, and let the orchestration handle the rest.
By the time the image and Helm chart are prepared, the K3S cluster (via KubeSphere) will pull the latest chart and spin up your application on one of the worker nodes. The master node coordinates this, but the workload runs on workers, keeping the master free for control duties.
RabbitMQ (Messaging and Task Queue)
One of the standout components of the iG3 architecture is the use of RabbitMQ, a robust messaging broker.
RabbitMQ is central to how external systems interact with services running in the edge cluster.
Instead of exposing every microservice in the cluster via a public API or IP (which could be complex and insecure, especially since edge nodes might be behind NAT or on dynamic networks), iG3 uses RabbitMQ as a bridge between the outside world and the cluster’s services.
Here’s how it works:
Each service running on the cluster that needs to handle external requests will subscribe to a RabbitMQ queue. The RabbitMQ server is accessible on a public/accessible IP (for example, it could be a cloud-hosted RabbitMQ endpoint, possibly with one instance per region for low latency). The edge workers connect out to RabbitMQ and listen on specific queues for tasks. Because the connection is outgoing from the cluster to RabbitMQ, the workers do not need to expose any public IP or port themselves. They remain safely behind the mesh network, yet they can receive work from external sources via the queue.
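To make this concrete, here is a minimal sketch of what an edge worker's queue subscription could look like, using the pika client library for RabbitMQ. The queue name (ig3.tasks), the JSON task schema, and the handler logic are all illustrative assumptions, not iG3's actual conventions.

```python
import json


def handle_task(body: bytes) -> bytes:
    """Process one task message and return a result payload.

    The task format here (a JSON object with "op" and "data" fields)
    is a made-up example, not the actual iG3 schema.
    """
    task = json.loads(body)
    if task["op"] == "sum":
        result = {"status": "ok", "result": sum(task["data"])}
    else:
        result = {"status": "error", "reason": f"unknown op {task['op']!r}"}
    return json.dumps(result).encode()


def run_worker(broker_host: str, queue: str = "ig3.tasks") -> None:
    """Connect outward to RabbitMQ and consume tasks until stopped."""
    # pika is the de-facto Python RabbitMQ client; imported lazily so the
    # pure handler above can be exercised without a broker present.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host=broker_host))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    # Fair dispatch: take one message at a time so a busy worker
    # doesn't hoard tasks while idle workers wait.
    channel.basic_qos(prefetch_count=1)

    def on_message(ch, method, properties, body):
        reply = handle_task(body)
        # Send the result to whichever reply queue the requester named.
        if properties.reply_to:
            ch.basic_publish(
                exchange="",
                routing_key=properties.reply_to,
                properties=pika.BasicProperties(correlation_id=properties.correlation_id),
                body=reply,
            )
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=queue, on_message_callback=on_message)
    # Blocks here; note the connection was opened *outbound*, so the
    # worker never needs a public IP or open inbound port.
    channel.start_consuming()
```

Note that the network connection is entirely outbound, which is exactly why the workers can stay behind NAT or the mesh network while still receiving work.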
RabbitMQ enables an asynchronous, decoupled communication pattern:
- External apps don’t call the edge service directly; instead, they post a message (a task request) to RabbitMQ.
- One of the subscribed worker services in the cluster picks up the message from the queue, processes the task, and when done, sends back a response message (perhaps on a separate response queue or by updating the message status).
- The external app that requested the task can then consume the response from RabbitMQ when it’s ready.
This setup is powerful because it naturally buffers and load-balances tasks. If many requests come in, they queue up and are processed as workers are available. It also means the external caller and the internal worker don’t have to be directly connected — they just need to agree on using the queue. In practice, this can be near-real-time (the external app can block waiting for the response message, giving a synchronous feel) or fully asynchronous (where the response might come later and be handled by a callback or separate process). RabbitMQ’s role is essentially to connect the dots between the cluster and clients in a reliable way, without exposing the cluster’s network details.
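The synchronous-feeling variant described above is essentially the classic RPC-over-RabbitMQ pattern. Here is a hedged sketch of the requester side using pika; the queue name, envelope shape, and helper names are assumptions for illustration only.

```python
import json
import uuid


def make_task(op: str, data) -> tuple:
    """Build a task body plus a correlation id used to match the reply.

    The envelope shape is illustrative, not the actual iG3 schema.
    """
    corr_id = str(uuid.uuid4())
    body = json.dumps({"op": op, "data": data}).encode()
    return corr_id, body


def request_task(broker_host: str, op: str, data, queue: str = "ig3.tasks") -> dict:
    """Publish a task and block until the matching reply arrives (RPC style)."""
    import pika  # RabbitMQ client; only needed when actually talking to a broker

    connection = pika.BlockingConnection(pika.ConnectionParameters(host=broker_host))
    channel = connection.channel()
    # Exclusive, server-named reply queue: deleted when this client disconnects.
    reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue

    corr_id, body = make_task(op, data)
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
        body=body,
    )
    # Block until the worker's reply shows up; skip any message whose
    # correlation id doesn't match our request.
    for _method, props, reply_body in channel.consume(reply_queue, auto_ack=True):
        if props.correlation_id == corr_id:
            channel.cancel()
            connection.close()
            return json.loads(reply_body)
```

For a fully asynchronous flow, the requester would instead register a long-lived consumer on a shared reply queue and dispatch responses by correlation id as they arrive.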
Third-Party Apps and Clients
On the far right of the architecture diagram are the third-party apps and clients.
These represent the external world that wants to make use of the edge network’s computing power. A “client” could be a mobile app, a web application in a browser, an IoT device, basically an end-user or end-device that needs some work done (for example, an AI inference on some data, image processing, etc.).
The “third-party app” is typically the server or service that the client interacts with directly. Think of it as the application or backend owned by a company or developer that has integrated with iG3.
For instance, imagine a mobile app that lets users upload a photo to get analyzed by an AI. The mobile app (client) sends the photo to the company’s server (third-party app). Now, instead of that server doing all the heavy lifting, it hands off the image to the iG3 Edge Network for analysis.
How does it hand it off?
Through RabbitMQ — the third-party app publishes the image (or a reference to it) as a task message to the appropriate queue. It then waits (perhaps listening on a reply queue) for the result. When an edge worker in the iG3 cluster processes the image and returns the analysis result to RabbitMQ, the third-party server receives it and then sends the result back to the mobile client. From the end-user’s perspective, it’s seamless — they got their result quickly, but behind the scenes an edge network did the work asynchronously via messaging.
In summary, third-party apps are the bridge between end-users and the iG3 network. They handle client requests and responses, and they use the RabbitMQ interface to delegate tasks to the edge cluster. This separation of concerns keeps clients oblivious to the complexity of the edge network while allowing developers to tap into powerful edge computing resources in a secure and decoupled way.
End-to-End Workflow: From Code to Client
Now that we have introduced the components, let’s walk through how everything works together step by step.
There are two main parts to consider:
(1) the deployment pipeline that takes new code from a developer and deploys it onto the edge cluster, and
(2) the request/response cycle where an end-user’s request gets processed by the edge network. Here’s an end-to-end scenario tying it all together:
Deployment Pipeline (Code → Edge):
- Developer pushes code to Git: A developer makes changes to the application’s code and pushes to the Git repository (for example, on GitHub).
- CI builds a Docker image: A continuous integration process (triggered by the git push) builds the application into a Docker image. This ensures the app runs with all its dependencies packaged.
- Image pushed to Registry: The newly built Docker image (tagged with a version) is uploaded to the container registry.
- Deploy via Helm to K3S: The Helm chart for this application is updated to use the new image version, and a deployment is initiated (either automatically via a CI/CD pipeline or manually through KubeSphere). KubeSphere (or Kubernetes directly) pulls the updated chart from the Helm repo and schedules the application to run on a K3S worker node. The service is now live on the edge cluster, running the latest code. (The master node orchestrates this, but the actual pod runs on a worker node.)
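The pipeline steps above boil down to a short command sequence. The following sketch assembles those commands in Python (so the sequence can be inspected before running); the registry address, chart path, and naming scheme are placeholders, not iG3's actual conventions.

```python
import shlex


def pipeline_commands(app: str, version: str, registry: str, chart_dir: str) -> list:
    """Return the shell commands a minimal CI job would run for one release.

    A real pipeline (e.g. a KubeSphere DevOps pipeline) would run these as
    stages with credentials and error handling; this is only the skeleton.
    """
    image = f"{registry}/{app}:{version}"
    return [
        # Package the app and its dependencies into a versioned image.
        f"docker build -t {shlex.quote(image)} .",
        # Publish the image so any edge node can pull it.
        f"docker push {shlex.quote(image)}",
        # Point the Helm release at the new tag; Kubernetes performs a
        # rolling update of the pods on the K3S worker nodes.
        f"helm upgrade --install {app} {chart_dir} --set image.tag={version}",
    ]


for cmd in pipeline_commands(
    "image-analyzer", "1.4.2", "registry.example.com/ig3", "./charts/image-analyzer"
):
    print(cmd)
```

The key property is that every step is declarative and versioned: the image tag and chart revision together pin exactly what runs on the edge.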
Request Handling (Edge ←→ Client):
- Client makes a request: An end-user (client) triggers an action that requires the edge service. For example, a user on a smartphone app requests an AI computation. This request hits the third-party application’s server.
- Third-party app enqueues a task: The third-party app, which knows it should use iG3 for this task, packages the request (e.g. the data needed for processing) into a message and publishes it to a RabbitMQ queue designated for that type of task. Let’s call it the “task queue.”
- Edge worker processes the task: On the iG3 side, one of the running service instances on the K3S worker node is subscribed to the task queue. It receives the message almost immediately after it’s published. The service then processes the task — doing whatever computation or work is required. During this time, the third-party app might be waiting for the result (if it’s a synchronous call) or might have other things to do if it’s async.
- Result returned via RabbitMQ: Once the edge service finishes processing, it sends the result back. This could be done by publishing a message to a designated response queue (perhaps the third-party app provided a reply-to queue in the initial message, or there’s a predefined queue for results). RabbitMQ delivers this message to the third-party application.
- Third-party app responds to client: The third-party app picks up the result from RabbitMQ, then continues the flow back to the end-user. For a synchronous scenario, this means the app now has the data needed to respond to the waiting client request — it sends the response (e.g. the AI analysis result) back to the mobile app or web client. From the client’s perspective, they made a request and got a response with the processed result. In an asynchronous scenario, the client might be notified or pull the result later, but the mechanism is similar — the heavy lifting was done by the edge network and delivered via the queue.
Throughout this whole sequence, notice a few important things:
- The edge cluster was updated with new code seamlessly via the CI/CD pipeline, ensuring that when the client’s request came in, the latest logic was running.
- Neither the client nor the third-party server needed to know anything about the internal network details of the edge cluster. They just interacted with RabbitMQ as the interface.
- The use of RabbitMQ decouples the timing — if the edge service was busy or took a few seconds, the request is naturally queued and the client can be handled asynchronously or with a slight delay, without anything breaking. It provides resilience and buffering.
Key Design Choices and Benefits
The iG3 architecture was deliberately designed with certain principles in mind to suit the edge computing environment. Let’s highlight a few key design choices and why they are important:
- Mesh Network for Connectivity: All K3S clusters in iG3 are connected via a secure mesh network. This essentially means that whether there is one cluster or many clusters, they can communicate with each other over encrypted, private channels using local IP addresses. The mesh network abstracts away the fact that edge nodes might be in different physical locations or behind firewalls/NAT. By having a mesh, the cluster nodes (and even multiple clusters) form a virtual intranet. This design choice makes communication between nodes secure and efficient, without exposing internal traffic to the public internet. It also allows iG3 to easily scale to multiple clusters in different regions — those clusters can cooperate or be managed together via KubeSphere because the mesh makes them feel like part of one network. In short, the mesh network provides secure, reliable connectivity across distributed edge devices.
- Private vs. Public IP Handling: In an edge network, devices often don’t have public IP addresses; they might be on home or enterprise networks. Exposing each device or service to the internet would be a networking nightmare and a security risk. iG3’s architecture smartly avoids this by not requiring any inbound public connections to the edge nodes. The workers initiate outgoing connections to RabbitMQ (which does have a reachable address). All service requests are funneled through that single egress point. This means each edge node only needs a private IP (for within the mesh) and internet access to reach the RabbitMQ server — no need to fiddle with port forwarding or DNS for each service. By handling public interaction via RabbitMQ, iG3 effectively keeps the edge nodes dark to the outside world, greatly reducing the attack surface. Meanwhile, within the mesh, nodes freely talk to each other on private IPs for any coordination needed. This separation of private internal network and controlled public interface (the queue) is a key security and simplicity win.
- Asynchronous Messaging via RabbitMQ: The choice of using an asynchronous task queue (RabbitMQ) as the interface between clients and the edge services brings several benefits. It decouples the producer (third-party app/client) from the consumer (edge worker). They don’t have to be online at exactly the same time or know each other’s addresses; they just need to trust the queue. This makes the system more resilient — if an edge worker is briefly down or busy, messages just wait in the queue until it’s ready. It also enables scalability and load balancing: multiple workers can consume from the same queue, so you can scale out the number of edge nodes or instances to meet demand, and RabbitMQ will distribute tasks among them. Another advantage is flexibility in workflow — the third-party app could choose to handle the response asynchronously (letting it do other work or handle more requests while waiting for results), which is particularly useful for long-running AI tasks. Overall, the asynchronous queue design makes the system more robust to network issues and variable workloads, which is crucial in edge environments where connectivity can be spotty and processing loads can spike.
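The buffering and load-balancing behavior described in the messaging bullet can be demonstrated with an in-process stand-in for the broker: Python's stdlib queue.Queue with competing consumer threads. A real deployment would use RabbitMQ, but the pattern is the same: a burst of requests queues up and is drained as workers become free.

```python
import queue
import threading

task_queue = queue.Queue()          # stand-in for a RabbitMQ task queue
results = []                        # (worker_name, result) pairs
lock = threading.Lock()


def worker(name: str) -> None:
    # Each worker pulls the next task as it becomes free; a slow worker
    # simply leaves messages in the queue for the others, which is how
    # multiple consumers on one RabbitMQ queue behave as well.
    while True:
        task = task_queue.get()
        if task is None:            # sentinel: shut this worker down
            task_queue.task_done()
            return
        with lock:
            results.append((name, task * task))
        task_queue.task_done()


workers = [threading.Thread(target=worker, args=(f"edge-{i}",)) for i in range(3)]
for w in workers:
    w.start()

for task in range(10):              # a burst of requests: they queue up...
    task_queue.put(task)
task_queue.join()                   # ...and are drained as workers free up

for _ in workers:                   # one shutdown sentinel per worker
    task_queue.put(None)
for w in workers:
    w.join()

print(f"{len(results)} tasks processed")
```

No producer ever addressed a worker directly, and adding capacity is just starting another consumer on the same queue, which mirrors scaling out edge nodes in iG3.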
- Lightweight Kubernetes (K3S) with Helm: By using K3S, the iG3 network can run Kubernetes in environments that would be too resource-constrained for a full-fledged Kubernetes cluster. This is a nod to the fact that edge devices might not have tons of CPU/RAM like cloud servers. K3S trims the fat but still lets us use the rich Kubernetes ecosystem (like Helm charts for deployment, and KubeSphere for management). This design choice ensures that developers have a familiar deployment model (containers, pods, services) even on the edge. It also means iG3 can support modern DevOps practices (CI/CD, rolling updates, etc.) on edge hardware. The use of Helm charts for deployment means that the exact same application definitions can be reused across clusters, and updating a service is as simple as bumping a version and running a Helm upgrade — the system takes care of rolling it out to the edge nodes.
Each of these design decisions (mesh networking, avoiding public IP exposure, asynchronous queues, and leveraging lightweight Kubernetes) works in concert to address the unique challenges of edge computing, like network unreliability, security, and scalability, while keeping the developer experience as smooth as possible.
What’s Next: Decentralizing iG3 with peaq Blockchain
The iG3 Edge Network already provides a powerful distributed infrastructure, but the team has plans to take it a step further. What’s next? Moving towards an even more decentralized model by integrating with the peaq blockchain. This upcoming enhancement will transform how iG3 coordinates and rewards its network of edge devices, bringing in blockchain benefits to the architecture.
So, what does integrating with peaq blockchain mean for iG3 and why do it? In essence, peaq is a blockchain platform designed for decentralized physical infrastructure networks (DePINs) and machine economy use cases. By migrating parts of iG3’s system onto peaq, the network stands to gain several improvements:
- Decentralized Task Coordination: Currently, RabbitMQ (while distributed in use) is still a centrally hosted component. In the future, tasks and their assignment could be managed in a decentralized way on the blockchain — think of it like a decentralized queue or marketplace for tasks. This removes any single point of failure or control. Any edge node could fetch tasks from this decentralized system, and multiple organizations can trust the task assignment because the blockchain provides transparency.
- Trust and Transparency: With blockchain integration, the operations of the network become more transparent and trustless. For example, every edge device (node) could have a decentralized identity registered on-chain (using a DID). This identity can verify the node’s legitimacy and capabilities. When a device completes a task, it could record proof of that work (and perhaps proof of uptime or performance) on the blockchain. This creates an immutable record of who did what.
- Reward System via Tokens: peaq integration will enable a token-based reward mechanism. Edge node operators (people or organizations running the iG3 edge devices) can be rewarded with iG3 tokens automatically for their contributions (CPU/GPU time, tasks completed, availability, etc.), with the blockchain handling the distribution fairly and transparently. Smart contracts on peaq can calculate rewards based on metrics (like number of tasks completed, latency, uptime proofs) and issue payouts without human intervention. This incentivizes more participants to join and contribute compute power to the network, accelerating growth.
- Device Licensing and Governance: Another enhancement with blockchain is the ability to manage device licenses or tiers via NFTs or smart contracts. Each device could be issued a license token when it joins the network, which validates it on the blockchain. This makes the network more open: new devices can join in a trustless way by obtaining a license token. Governance could also be decentralized; for instance, the community might vote (via a DAO mechanism) on protocol upgrades or reward parameters in the future, using the token. This means the ecosystem could evolve with input from its participants, not just the core team.
Wrapping Up
The iG3 Edge Network combines the best of modern cloud-native tooling (Kubernetes, containers, CI/CD) with an edge-first mindset (lightweight clusters, mesh networking, and queued messaging).
The result is a developer-friendly platform where you can deploy code to distributed edge devices and have clients use those services without worrying about the complexities of networking or infrastructure.
We started with a Git commit and ended with a client getting a response — and we saw how each piece, from K3S and KubeSphere to RabbitMQ, plays a role in making that happen smoothly. By embracing asynchronous communication and keeping the architecture modular, iG3 achieves a robust, scalable system suitable for real-time AI and other demanding applications at the edge.
As the network evolves with blockchain integration (thanks to peaq), it’s poised to become even more decentralized and community-driven.
For developers, this is an exciting trajectory: it means the infrastructure you rely on becomes more resilient and autonomous over time. Whether you’re deploying an AI microservice to an iG3 edge cluster or queuing up tasks for processing, you can do so with confidence that the architecture is built to handle the challenges of edge computing, and it’s only getting better with the forthcoming decentralized enhancements.
Stay tuned for more updates on iG3’s journey into decentralization. The edge cloud is here, and with iG3, it’s becoming smarter and more accessible for everyone.