Written by Naveen M
As part of our Kubernetes platform team, we face the constant challenge of providing real-time visibility into user workloads. From monitoring resource usage to tracking Kubernetes cluster activity and application status, there are numerous open-source solutions available for each specific category. However, these tools are often scattered across different platforms, resulting in a fragmented user experience. To address this issue, we have embraced the power of server-side streaming, enabling us to deliver live resource usage, Kubernetes events, and application status as soon as users access our platform portal.
By implementing server-side streaming, we can seamlessly stream data to the user interface, providing up-to-date information without manual refreshes or constant API calls. This approach transforms the user experience, allowing users to instantly visualize the health and performance of their workloads in a unified, simplified way. Whether it's monitoring resource utilization, staying informed about Kubernetes events, or keeping tabs on application status, our server-side streaming solution brings all the critical information together in a single, real-time dashboard. The same approach applies to anyone who wants to stream live data to a user interface.
Gone are the days of navigating through multiple tools and platforms to gather essential insights. With our streamlined approach, users can access a comprehensive overview of their Kubernetes environment the moment they land on our platform portal. By harnessing the power of server-side streaming, we have transformed the way users interact with and monitor their workloads, making their experience more efficient, intuitive, and productive.
Through our blog series, we aim to guide you through the intricacies of setting up server-side streaming with technologies such as React.js, Envoy, gRPC, and Golang.
There are three main components involved in this project:
1. The backend, which is developed using Golang and utilizes gRPC server-side streaming to transmit data.
2. The Envoy proxy, which is responsible for making the backend service accessible to the outside world.
3. The frontend, which is built using React.js and employs grpc-web to establish communication with the backend.
The series is divided into multiple parts to accommodate the diverse language preferences of developers. If you're specifically interested in Envoy's role in streaming, or want to learn about deploying an Envoy proxy in Kubernetes, you can jump to the second part (Envoy as a frontend proxy in Kubernetes). If you only care about the frontend, you can skip straight to the frontend part of the series.
In this initial part, we'll focus on the easiest segment of the series: how to set up gRPC server-side streaming with Go. We will walk through a sample application that uses server-side streaming. Fortunately, there is a wealth of content available on the internet for this topic, tailored to your preferred programming language.
It's time to put our plan into action! Assuming you have a basic understanding of gRPC, Protocol Buffers, and Go, let's dive right into the code.
Step 1: Create the Proto File
To begin, we need to define a protobuf file that will be used by both the client and server sides. Here's a simple example:
syntax = "proto3";

package protobuf;

service StreamService {
  rpc FetchResponse (Request) returns (stream Response) {}
}

message Request {
  int32 id = 1;
}

message Response {
  string result = 1;
}
In this proto file, we define a single RPC, FetchResponse, which takes a Request parameter and returns a stream of Response messages.
Step 2: Generate the Protocol Buffer Code
Before we proceed, we need to generate the corresponding pb file that will be used in our Go program. Each programming language has its own way of generating protocol buffer code; in Go, we will be using the protoc compiler.
If you haven't installed it yet, you can find the installation guide provided by Google.
To generate the protocol buffer file, run the following command:
protoc --go_out=plugins=grpc:. *.proto
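Note that the command above uses the legacy plugin syntax from the old protoc-gen-go. In current versions of protoc-gen-go (from the google.golang.org/protobuf module), the plugins=grpc option has been removed and gRPC service code is generated by a separate plugin. A sketch of the modern equivalent, assuming your proto files are in the current directory:

```shell
# Install the two code generators (message types and gRPC service code):
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

# --go_out emits the message types, --go-grpc_out emits the service/stream code:
protoc --go_out=. --go-grpc_out=. *.proto
```

With the modern plugins, the output is split across two files (for example data.pb.go and data_grpc.pb.go) instead of one.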
Assuming the proto file is named data.proto, we now have the data.pb.go file ready to be used in our implementation.
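To make the generated code less of a black box, here is a simplified sketch of the Go types protoc produces from our proto file. This is not the actual generated output (the real code also carries protobuf marshaling methods and registration helpers), but it shows the shape of what the server will work with: the message structs and the stream handle whose Send method pushes one Response to the client.

```go
package main

import "fmt"

// Request mirrors `message Request { int32 id = 1; }`.
type Request struct {
	Id int32
}

// Response mirrors `message Response { string result = 1; }`.
type Response struct {
	Result string
}

// StreamService_FetchResponseServer is the stream handle the generated code
// passes to the server's FetchResponse implementation; each Send pushes one
// Response down the open stream to the client.
type StreamService_FetchResponseServer interface {
	Send(*Response) error
}

// printStream is a stand-in implementation used here only to show the shape
// of the Send call; in real code the gRPC runtime supplies the stream.
type printStream struct{}

func (printStream) Send(r *Response) error {
	fmt.Println("sent:", r.Result)
	return nil
}

func main() {
	var srv StreamService_FetchResponseServer = printStream{}
	srv.Send(&Response{Result: "hello from the stream"})
}
```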
Step 3: Server-Side Implementation
To create the server file, follow the code snippet below:
package main

import (
	"fmt"
	"log"
	"net"
	"sync"
	"time"

	pb "github.com/mnkg561/go-grpc-server-streaming-example/src/proto"
	"google.golang.org/grpc"
)

type server struct{}

func (s server) FetchResponse(in *pb.Request, srv pb.StreamService_FetchResponseServer) error {
	log.Printf("Fetching response for ID: %d", in.Id)

	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(count int) {
			defer wg.Done()

			// Simulate a different processing time for each response.
			time.Sleep(time.Duration(count) * time.Second)

			// Note: gRPC streams are not safe for concurrent Send calls; the
			// staggered sleeps keep the sends apart here, but production code
			// should serialize sends explicitly.
			resp := pb.Response{Result: fmt.Sprintf("Request #%d for ID: %d", count, in.Id)}
			if err := srv.Send(&resp); err != nil {
				log.Printf("Error sending response: %v", err)
			}
			log.Printf("Finished processing request number: %d", count)
		}(i)
	}

	// Return only after every response has been sent; returning closes the stream.
	wg.Wait()
	return nil
}

func main() {
	lis, err := net.Listen("tcp", ":50005")
	if err != nil {
		log.Fatalf("Failed to listen on port 50005: %v", err)
	}

	s := grpc.NewServer()
	pb.RegisterStreamServiceServer(s, server{})

	log.Println("Server started at port 50005")
	if err := s.Serve(lis); err != nil {
		log.Fatalf("Failed to serve: %v", err)
	}
}

In this server file, I have implemented the FetchResponse function, which receives a request from the client and sends back a stream of responses. The server simulates concurrent processing using goroutines: for each request, it streams five responses to the client, each delayed by a different duration to simulate varying processing times.
The server listens on port 50005 and registers the StreamServiceServer with the created server. Finally, it starts serving requests and logs a message indicating that the server has started.
Now you have the server file ready to handle streaming requests from clients.
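Because compiling the full server requires the generated pb package and the grpc module, here is a self-contained sketch of the same fan-out pattern. A hypothetical mockStream stands in for the generated stream handle, and fetchResponse reproduces the handler's logic: five goroutines, each sleeping a different duration, each sending one response, with the handler returning only after all sends complete. Unlike a real gRPC stream, the mock guards Send with a mutex so the concurrent goroutines are safe.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Response mirrors the generated protobuf message (illustrative only).
type Response struct{ Result string }

// mockStream stands in for the generated StreamService_FetchResponseServer.
// Real gRPC streams must not be used for concurrent Send calls, so we guard
// the mock with a mutex to make the fan-out safe.
type mockStream struct {
	mu   sync.Mutex
	sent []string
}

func (m *mockStream) Send(r *Response) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.sent = append(m.sent, r.Result)
	return nil
}

// fetchResponse reproduces the handler's fan-out: five goroutines, each
// sleeping a different duration, then sending one response on the stream.
func fetchResponse(id int32, srv *mockStream) {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(count int) {
			defer wg.Done()
			// Short sleeps here instead of whole seconds, to keep the demo fast.
			time.Sleep(time.Duration(count) * 10 * time.Millisecond)
			srv.Send(&Response{Result: fmt.Sprintf("Request #%d for ID: %d", count, id)})
		}(i)
	}
	wg.Wait() // the handler returns only after all responses are sent
}

func main() {
	s := &mockStream{}
	fetchResponse(1, s)
	fmt.Println(len(s.sent), "responses streamed")
}
```

Running this prints that five responses were streamed, mirroring what a client of the real server would receive over the open stream.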
Stay tuned for Part 2 where we will continue to dive into the exciting world of streaming data and how it can revolutionize your user interface.