Litepaper

1. Introduction

This litepaper presents a comprehensive technical overview of Ermis’ communication and streaming platform. Our system leverages a hybrid architecture that combines the benefits of centralized message brokers with decentralized edge nodes to deliver a scalable, low-latency solution for next-generation communication and media streaming.

In the rapidly evolving landscape of web3 technologies, our platform stands out by addressing critical challenges in distributed communication systems. By utilizing edge computing principles and incorporating blockchain technologies, we offer a unique value proposition that balances performance, security, and decentralization.

The following sections will delve into the technical specifications of our system architecture, detailing the roles of edge nodes and centralized message brokers. We will explore the protocols and mechanisms that enable efficient message routing, stream management, and data integrity verification. Furthermore, this paper will elucidate our approach to scalability, security, and economic incentives that form the backbone of our ecosystem.

As we prepare to enter the market with edge node sales, this litepaper serves as a technical blueprint for developers, investors, and partners to understand the intricate workings of our platform and its potential to reshape the future of decentralized communication.


2. System Architecture Overview

Ermis is built on a hybrid architecture that combines centralized and decentralized elements to optimize performance, scalability, and reliability. The core components of our system are:

1. Edge Nodes: Distributed computing units that serve as the primary interface for end-users, handling message delivery and media streaming.

2. Centralized Message Brokers: High-performance servers that manage message routing, media streams ingestion, and system coordination.

3. Blockchain Layer: A decentralized ledger that ensures correctness via proof-of-relay, manages identity, and facilitates token-based incentives.

4. Client Applications: User-facing software that interacts with edge nodes to send and receive messages and media streams.

 

The system operates on a multi-tiered structure:

Tier 1: Client Layer

  • End-user devices running client applications

  • Local caching and encryption

Tier 2: Edge Node Layer

  • Geographically distributed edge nodes

  • Message broadcasting

  • Media delivery with transcoding and adaptive bitrate streaming

Tier 3: Message Broker Layer

  • Centralized high-throughput message brokers using a custom zero-copy protocol

  • Global routing and load balancing

  • Session management and presence information

Tier 4: Blockchain Layer

  • Decentralized ledger for proof-of-relay and identity management

  • Smart contracts for governance and incentive distribution

 

Data flow within the system follows an optimized path:

1. Client applications connect to the nearest edge node.

2. Edge nodes handle local message delivery and streaming when possible.

3. For inter-region communication, edge nodes relay messages through centralized message brokers.

4. Message brokers coordinate global message routing and load balancing.

5. The blockchain layer periodically verifies data integrity and manages identity and incentives.
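The routing decision in steps 2 and 3 can be sketched as follows. This is a minimal illustration, not the production API; the type and function names are assumptions introduced for clarity:

```rust
/// Illustrative sketch of an edge node's routing decision: deliver locally
/// when both peers share a region, otherwise relay via a central broker.
#[derive(Debug, PartialEq)]
enum Route {
    LocalEdge, // step 2: handled entirely at the edge node
    ViaBroker, // step 3: relayed through a centralized message broker
}

fn choose_route(sender_region: &str, recipient_region: &str) -> Route {
    if sender_region == recipient_region {
        Route::LocalEdge
    } else {
        Route::ViaBroker
    }
}

fn main() {
    assert_eq!(choose_route("eu-west", "eu-west"), Route::LocalEdge);
    assert_eq!(choose_route("eu-west", "ap-south"), Route::ViaBroker);
}
```

Keeping this decision at the edge is what lets same-region traffic bypass the broker tier entirely.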

 

This hybrid architecture allows us to leverage the speed and efficiency of centralized systems for real-time communication while benefiting from the security and transparency of decentralized networks.

3. Scalability and Performance

Ermis is designed with scalability and performance as core principles, leveraging the hybrid architecture to achieve optimal results. The combination of edge nodes and centralized message brokers allows for a highly efficient and scalable system that can handle millions of concurrent users while maintaining low latency.


3.1 Horizontal Scalability

The platform achieves horizontal scalability through its distributed edge node network. As user demand grows, new edge nodes can be seamlessly added to the network, increasing the overall capacity of the system. This approach allows for:

  • Geographic scalability: Edge nodes can be deployed in new regions to serve local users, reducing latency and improving performance.

  • Load distribution: The system automatically distributes user connections across available edge nodes, preventing any single node from becoming a bottleneck.


3.2 High-Performance Message Brokers

At the heart of our system lie the centralized message brokers, engineered for unprecedented throughput and minimal latency. These high-performance servers, meticulously crafted in Rust, leverage cutting-edge techniques to achieve remarkable message processing capabilities:

  • Thread-per-core Architecture: Each message broker utilizes a thread-per-core model, where each CPU core is assigned a dedicated thread. This approach eliminates context switching overhead and maximizes CPU utilization, allowing for efficient parallel processing of messages.

  • Zero-copy Networking: Implementing zero-copy network operations dramatically reduces CPU overhead and memory bandwidth usage. By directly mapping network buffers to user space, we eliminate redundant data copying, significantly boosting throughput and reducing latency.

  • Lock-free Data Structures: Our custom-designed lock-free data structures minimize contention and enable near-linear scaling across CPU cores, ensuring optimal performance even under high load.

  • Adaptive Batch Processing: Intelligent batching of messages optimizes throughput while maintaining low latency, automatically adjusting batch sizes based on current load conditions.
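As one concrete illustration, adaptive batch sizing can be sketched in a few lines of Rust. The thresholds and bounds below are illustrative assumptions, not broker defaults: the batch grows when the queue backs up (favoring throughput) and shrinks when it drains (favoring latency):

```rust
/// Minimal sketch of adaptive batch sizing. Parameters are illustrative.
const MIN_BATCH: usize = 8;
const MAX_BATCH: usize = 1024;

fn next_batch_size(current: usize, queue_depth: usize) -> usize {
    if queue_depth > current * 2 {
        (current * 2).min(MAX_BATCH) // backlog growing: batch more
    } else if queue_depth < current / 2 {
        (current / 2).max(MIN_BATCH) // queue draining: batch less
    } else {
        current // steady state: keep the current batch size
    }
}

fn main() {
    let mut batch = 64;
    batch = next_batch_size(batch, 500); // heavy backlog doubles the batch
    assert_eq!(batch, 128);
    batch = next_batch_size(batch, 10); // drained queue halves it again
    assert_eq!(batch, 64);
}
```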

 

These advanced techniques enable our message brokers to achieve extraordinary performance metrics:

  • Message Throughput: Each broker can process over 1 million messages per second on commodity hardware.

  • Latency: Median latency for message routing is under 100 microseconds.

  • Scalability: Near-linear throughput scaling with additional CPU cores, up to hundreds of cores per broker.

  • Concurrency: Ability to handle hundreds of thousands of concurrent connections per broker.

By combining these high-performance message brokers with our distributed edge node network, we create a hybrid architecture that excels in both local and global message routing, setting new standards for scalability and performance in web3 communication platforms.


3.3 Load Balancing and Traffic Management

To ensure optimal performance across the network, our platform implements advanced load balancing and traffic management techniques:

  • Dynamic routing: Messages are routed through the most efficient path based on real-time network conditions.

  • Rate limiting: Intelligent rate limiting prevents abuse and ensures fair resource allocation among users.
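A common way to implement such rate limiting is a token bucket, sketched below. The capacity and refill rate are illustrative assumptions, not platform defaults:

```rust
use std::time::Instant;

/// Token-bucket sketch of per-user rate limiting: a user may burst up to
/// `capacity` messages, then is throttled to `refill_per_sec` sustained.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    /// Returns true if the request is allowed, false if rate-limited.
    fn try_consume(&mut self, cost: f64) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2.0, 1.0); // burst of 2, then 1 msg/s
    assert!(bucket.try_consume(1.0));
    assert!(bucket.try_consume(1.0));
    assert!(!bucket.try_consume(1.0)); // bucket drained: request rejected
}
```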


3.4 Caching and Edge Computing

Edge nodes play a crucial role in enhancing performance through local caching and edge computing capabilities:

  • Content caching: Frequently accessed content is cached at the edge, reducing latency and backbone traffic.

  • Edge processing: Media transcoding and adaptive bitrate streaming are performed at the edge, optimizing delivery for various network conditions and device capabilities.

  • Local message routing: Edge nodes can directly route messages between local users, bypassing the central brokers for improved latency.
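The content caching above can be sketched as a small fixed-capacity cache with least-recently-used eviction. This is a simplified stand-in; the nodes' actual eviction policy and storage layout may differ:

```rust
use std::collections::HashMap;

/// Minimal edge content cache sketch: fixed capacity, LRU eviction.
struct EdgeCache {
    capacity: usize,
    map: HashMap<String, Vec<u8>>,
    order: Vec<String>, // front = least recently used
}

impl EdgeCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: Vec::new() }
    }

    fn get(&mut self, key: &str) -> Option<&Vec<u8>> {
        if self.map.contains_key(key) {
            self.touch(key);
            self.map.get(key)
        } else {
            None
        }
    }

    fn put(&mut self, key: String, value: Vec<u8>) {
        if !self.map.contains_key(&key) && self.map.len() >= self.capacity {
            let lru = self.order.remove(0); // evict least recently used
            self.map.remove(&lru);
        }
        self.map.insert(key.clone(), value);
        self.touch(&key);
    }

    fn touch(&mut self, key: &str) {
        self.order.retain(|k| k != key);
        self.order.push(key.to_string());
    }
}

fn main() {
    let mut cache = EdgeCache::new(2);
    cache.put("a".into(), vec![1]);
    cache.put("b".into(), vec![2]);
    cache.get("a");                 // "a" becomes most recently used
    cache.put("c".into(), vec![3]); // capacity reached: evicts "b"
    assert!(cache.get("b").is_none());
    assert!(cache.get("a").is_some());
}
```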


3.5 Performance Metrics

Our infrastructure consistently achieves industry-leading performance metrics:

  • Message latency: < 50ms for 99th percentile of messages within the same region.

  • Streaming latency: < 100ms for real-time media streams, fostering closer interactions between broadcasters and their audience.

  • Concurrent users: Ability to scale to hundreds of millions of concurrent users.

  • Throughput: Each message broker capable of processing over 1 million messages per second.

  • Scalability: Linear scaling of capacity with the addition of new edge nodes.

4. Economic Model

4.1 Proof-of-Relay Mechanism

To ensure fair and accurate incentive distribution for node operators, our platform implements a novel proof-of-relay mechanism. This system guarantees that edge nodes are rewarded in proportion to their actual contribution to the network, preventing fraud and encouraging optimal performance.


4.2 Overview of Proof-of-Relay

Proof-of-Relay (PoR) is a consensus algorithm designed specifically for our decentralized communication network. It verifies and records the successful relay of messages and media streams by edge nodes, serving as the basis for incentive distribution.


4.3 Mechanism Details

1. Message Signing: Each message or stream segment is cryptographically signed by the originating client.

2. Relay Verification: As an edge node relays a message, it adds its own signature to a relay chain.

3. Destination Confirmation: The receiving client verifies the relay chain and sends a confirmation back through the network.

4. Blockchain Recording: Periodically, a summary of relay activities is recorded on the blockchain, including:

  • Number of messages relayed

  • Volume of data transferred

  • Network paths utilized

5. Smart Contract Execution: A smart contract processes the PoR data and calculates rewards based on predefined criteria.
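Steps 1 through 3 can be sketched as a chained stamp that each hop adds on top of the previous one. In this toy illustration a standard-library hash stands in for real cryptographic signatures, which the sketch deliberately does not implement; the node identifiers are hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy relay-chain stamp: each hop folds its identity into the running
/// value. A real deployment would use cryptographic signatures instead.
fn stamp(prev: u64, node_id: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    node_id.hash(&mut h);
    h.finish()
}

fn main() {
    // Step 1: the originating client stamps the message.
    let origin = stamp(0, "client-alice");
    // Step 2: each relaying edge node adds its stamp to the chain.
    let hop1 = stamp(origin, "edge-node-7");
    let hop2 = stamp(hop1, "edge-node-12");
    // Step 3: the recipient recomputes the chain to verify the relay path.
    let check = stamp(stamp(stamp(0, "client-alice"), "edge-node-7"), "edge-node-12");
    assert_eq!(hop2, check);
}
```

Because each stamp depends on every stamp before it, a node cannot be inserted into or removed from the recorded path without the recipient's verification failing.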


4.4 Fraud Prevention

The PoR mechanism incorporates several features to prevent fraudulent activities:

  • Multi-node Verification: Critical messages are routed through multiple nodes, requiring consensus for validation.

  • Random Audits: The system periodically initiates test messages to verify honest reporting by nodes.


4.5 Performance Incentives

Beyond basic relay confirmation, the PoR system also factors in performance metrics to incentivize high-quality service:

  • Latency Measurements: Nodes that consistently deliver messages with lower latency receive higher rewards.

  • Uptime and Reliability: Nodes with higher uptime and fewer dropped messages are given preference in routing and rewards.

  • Bandwidth Contribution: Nodes that contribute more bandwidth to the network, especially during peak times, are rewarded accordingly.
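One way to combine these three factors is a weighted score per epoch, sketched below. The weights, normalizations, and field names are illustrative assumptions, not protocol constants:

```rust
/// Illustrative per-epoch reward score over the three incentive factors.
struct NodeMetrics {
    median_latency_ms: f64, // lower is better
    uptime: f64,            // fraction in [0, 1]
    bandwidth_gb: f64,      // data relayed during the epoch
}

fn reward_score(m: &NodeMetrics) -> f64 {
    // Map latency into [0, 1]: 0 ms scores 1.0, 100 ms or worse scores 0.0.
    let latency_score = (100.0 - m.median_latency_ms).max(0.0) / 100.0;
    // Saturating bandwidth term so no single node can dominate rewards.
    let bandwidth_score = m.bandwidth_gb / (m.bandwidth_gb + 100.0);
    0.3 * latency_score + 0.3 * m.uptime + 0.4 * bandwidth_score
}

fn main() {
    let fast = NodeMetrics { median_latency_ms: 20.0, uptime: 0.999, bandwidth_gb: 400.0 };
    let slow = NodeMetrics { median_latency_ms: 90.0, uptime: 0.90, bandwidth_gb: 50.0 };
    assert!(reward_score(&fast) > reward_score(&slow));
}
```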


4.6 Dynamic Adjustment

The PoR mechanism is designed to adapt to changing network conditions:

  • Reward Rate Adjustment: The reward rate for relays is dynamically adjusted based on overall network supply and demand.

  • Geographic Balancing: Rewards are weighted to incentivize node operation in underserved regions.

  • Protocol Updates: The PoR algorithm can be updated through governance proposals to address emerging challenges or opportunities.
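The reward rate adjustment can be sketched as a utilization-driven multiplier with bounds. The base rate, bounds, and linear form below are illustrative assumptions only:

```rust
/// Sketch of demand-based reward rate adjustment: when relay demand
/// outstrips capacity the rate rises to attract operators, and it falls
/// when capacity is oversupplied, clamped to a bounded band.
fn adjusted_rate(base_rate: f64, demand: f64, capacity: f64) -> f64 {
    let utilization = demand / capacity;
    (base_rate * utilization).clamp(base_rate * 0.5, base_rate * 2.0)
}

fn main() {
    assert!(adjusted_rate(10.0, 900.0, 500.0) > 10.0); // undersupplied: rate rises
    assert!(adjusted_rate(10.0, 100.0, 500.0) < 10.0); // oversupplied: rate falls
}
```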

 

By implementing this robust proof-of-relay mechanism, our platform ensures a fair, transparent, and manipulation-resistant system for incentivizing node operators. This approach not only secures the network against potential exploits but also aligns the economic interests of node operators with the overall health and performance of the platform.

5. Conclusion

Ermis’ communication and streaming platform represents a significant leap forward in decentralized digital infrastructure. By combining the scalability of edge computing, the efficiency of centralized message brokers, and the security of blockchain technology, we have created a robust ecosystem capable of meeting the demands of next-generation communication and media streaming. The implementation of proof-of-relay mechanisms ensures fair incentivization for node operators, while our Rust-based architecture guarantees high performance and security. As we prepare to launch edge node sales, we stand at the cusp of a new era in decentralized communication. This platform not only addresses current market needs but is also poised to adapt to future technological advancements, setting a new standard for web3 applications. We invite developers, investors, and users to join us in shaping the future of decentralized, high-performance communication networks.