Just as the information technology revolution heralded the proliferation of PCs and the mobility revolution the growth of the smartphone, it is expected that the IoT (Internet of Things) revolution will herald the proliferation of smart connected devices.
Estimates vary as to the exact number, but most experts agree that over the next 10 years there will be billions of additional smart devices connected to the network. Consider that each device will generate data at regular intervals; the result is data volumes unimaginable in previous eras.
The Need for Scale
As a central player in the processing of customer life cycle and usage data, the billing system needs to be able to handle and process large volumes of data in real-time. This requires an architecture that is distributed in nature and designed for scale.
The amount of data throughput into an ERP (Enterprise Resource Planning) or billing system does not stay constant over time. There are periods of high activity and periods when the system is essentially inactive. Usage from IoT sensors may be sporadic in nature.
Connectivity service providers might upload an entire period's worth of usage data in one shot, resulting in a spike in processing requirements to handle that burst. Once the data is processed, the system returns to steady state. This requires the system to scale elastically up and down to handle these times of high and low throughput.
Without elastic scaling, customers would pay significantly more in cloud hosting costs, since capacity sized for the peak sits idle the rest of the time.
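The scale-up/scale-down decision described above can be sketched as a simple autoscaling rule: size the worker pool to the current backlog, clamped between a floor and a ceiling. This is an illustrative sketch, not LogiSense's implementation; the function name, capacity figure, and limits are assumptions.

```python
import math

def desired_workers(backlog, records_per_worker_per_min,
                    min_workers=1, max_workers=50):
    """Return how many rating workers to run for the current queue backlog.

    backlog: number of unprocessed usage records waiting in the queue.
    records_per_worker_per_min: throughput one worker can sustain (assumed).
    """
    needed = math.ceil(backlog / records_per_worker_per_min)
    # Clamp: keep a floor of workers during quiet periods, and cap cost
    # during extreme bursts.
    return max(min_workers, min(max_workers, needed))

# Quiet period: a trickle of sensor readings needs only the minimum pool.
print(desired_workers(backlog=120, records_per_worker_per_min=1000))        # -> 1

# Burst: a provider uploads an entire period's usage in one shot.
print(desired_workers(backlog=2_000_000, records_per_worker_per_min=1000))  # -> 50
```

A real deployment would drive this rule from queue-depth metrics (e.g., a cloud autoscaler polling the message broker), but the clamped proportional logic is the same.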
Horizontally Distribute the Load via Microservices and Queues
Besides vertical scaling, enterprise systems also need to scale horizontally, distributing and parallelizing their load over multiple queues.
Partitioning an ERP system into microservices makes it highly scalable, as each microservice can be replicated independently to handle higher throughput.
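One common way to distribute load over multiple queues is stable hash partitioning: route each usage record to a queue based on a hash of its account, so every record for a given account lands on the same queue (and thus the same service replica). A minimal sketch, with the queue count and record shape chosen for illustration:

```python
import hashlib

NUM_QUEUES = 8  # assumed: one queue per rating-service replica

def queue_for(account_id: str) -> int:
    """Stable hash partitioning: the same account always maps to the same
    queue, so one replica processes that account's records in order."""
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_QUEUES

# Fan 100 hypothetical usage records out across the queues.
records = [{"account": f"acct-{i}", "usage": i} for i in range(100)]
queues = [[] for _ in range(NUM_QUEUES)]
for rec in records:
    queues[queue_for(rec["account"])].append(rec)

# Each queue receives a share of the load, and every record for a given
# account sits in exactly one queue.
print([len(q) for q in queues])
```

Adding replicas is then a matter of increasing the queue count and letting each new replica consume its own partition.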
Aggregate Data to Minimize Usage Volumes
Sometimes so much data is generated that vertical and horizontal scaling alone are not enough, and more needs to be done.
Congestion over the network also needs to be considered in scenarios where large amounts of data are being sent from external devices into the system. In these scenarios it is important to aggregate the data so as to minimize the number of usage records being processed. Aggregation is a complex topic, and care needs to be taken to ensure that granular attributes specific to each individual record are not lost during aggregation.
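The trade-off described above can be made concrete: aggregate only across records that share every rateable attribute, so the attributes that rating depends on survive the roll-up. A sketch with hypothetical field names; the grouping keys are assumptions standing in for whatever attributes a real rating engine needs:

```python
from collections import defaultdict

# Hypothetical raw usage records from IoT sensors.
raw = [
    {"device": "meter-1", "service": "data", "day": "2024-06-01", "bytes": 512},
    {"device": "meter-1", "service": "data", "day": "2024-06-01", "bytes": 256},
    {"device": "meter-1", "service": "sms",  "day": "2024-06-01", "bytes": 1},
    {"device": "meter-2", "service": "data", "day": "2024-06-01", "bytes": 128},
]

def aggregate(records, keys=("device", "service", "day"), measure="bytes"):
    """Collapse records sharing every key attribute into one row, summing
    the measure. Any attribute NOT listed in `keys` is lost, so the keys
    must cover everything the rating rules discriminate on."""
    totals = defaultdict(int)
    for rec in records:
        totals[tuple(rec[k] for k in keys)] += rec[measure]
    return [dict(zip(keys, group), **{measure: total})
            for group, total in totals.items()]

for row in aggregate(raw):
    print(row)
```

Here four raw records collapse to three: the two data records from meter-1 merge, while the SMS record and meter-2's record stay distinct because they differ on a grouping key.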
Leverage Enterprise-Grade Database Technology
Solution evaluators rarely weigh database technology when assessing the impact of throughput. However, the database can very quickly become a bottleneck in the system and act as a throughput constraint.
Attributes such as fill factor and memory need to be constantly optimized, and fragmentation minimized through efficient indexing. Online indexing in particular can provide significant performance improvements on large-scale deployments.
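The effect of efficient indexing is easy to demonstrate. Fill factor and online index rebuilds are features of server-class databases (e.g., SQL Server's `ONLINE = ON` option), but the core point (an index turns a full table scan into a targeted seek) can be illustrated with SQLite from Python. Table and index names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_record (account_id TEXT, qty INTEGER)")
conn.executemany("INSERT INTO usage_record VALUES (?, ?)",
                 [(f"acct-{i % 100}", i) for i in range(10_000)])

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(qty) FROM usage_record WHERE account_id = 'acct-7'"

# Before indexing: every lookup scans the whole table.
print(plan(query))

# After indexing: the planner seeks directly to the matching rows.
conn.execute("CREATE INDEX idx_usage_account ON usage_record (account_id)")
print(plan(query))
```

The first plan reports a scan of `usage_record`; the second reports a search using `idx_usage_account`. On a billing database under heavy write load, keeping such indexes defragmented (and rebuilding them online so queries are not blocked) is what keeps this seek behavior fast.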
How We Can Help
The LogiSense platform is designed in a modular fashion for deployment, cost and functional scalability.
Customers have a choice of multiple deployment options. The application is partitioned into a set of microservices and load is distributed evenly across multiple billing and rating threads.
Small database table sizes lead to less fragmentation and fast access times, while the underlying databases are constantly monitored and optimized for performance.
Chris has over 20 years of experience in the telecommunications industry dealing with Fortune 2000 enterprises and service providers globally. As VP of Innovation, Chris works with the market and with internal teams, providing technical leadership and vision, mentorship, and creative problem-solving.