Intro
Protobuf (or Protocol Buffers) is a language-agnostic and platform-neutral serialization format invented at Google. Each protocol buffer message is a small logical record of information, containing a series of name-value pairs.
Unlike XML or JSON, with Protocol Buffers you first define the schema in a .proto file. The resulting format is similar to JSON but simpler, smaller, strictly typed, not human-readable (only clients and servers that share the schema can interpret it), and faster to marshal/unmarshal. For example:
syntax = "proto3";

package gravatar;

service RouteGuide {
  rpc GetFeature(Point) returns (Feature) {}
}

message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}

message Feature {
  string name = 1;
  Point location = 2;
}
A message type is a list of numbered fields, and each field has a type and a name. After defining the .proto file, you run the protocol buffer compiler to generate code for the object (in the language of your choice), with get/set functions for the fields, as well as object serialization/deserialization functions. As you can see, you can also group messages into packages, which act as namespaces.
Installation
We compile a .proto file using the protoc compiler, which generates a source file for the target programming language. For Go, the compiler generates a .pb.go file with a type for each message type in your file.
To install the compiler, run:
brew install protobuf
Then, create and initialize a new project inside your GOPATH:
mkdir protobuf-example
cd protobuf-example
go mod init
Next, install Go support for Google’s protocol buffers:
go get -u github.com/golang/protobuf/protoc-gen-go
go install github.com/golang/protobuf/protoc-gen-go
Finally, compile all .proto files:
protoc --go_out=. *.proto
This generates a .pb.go file for each .proto file in the current directory. We can also generate output for other languages, like Java.
Backward Compatibility
With numbered fields, you never have to change the behavior of code going forward to maintain backward compatibility with older versions. As the documentation states, once Protocol Buffers were introduced:
“New fields could be easily introduced, and intermediate servers that didn’t need to inspect the data could simply parse it and pass through the data without needing to know about all the fields.”
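That pass-through behavior is easy to see at the wire level. As a hand-rolled sketch (standard library only, no generated code, and the field layout follows the Point message from earlier), the decoder below reads tagged varint fields, fills in the field numbers it knows (1 and 2), and silently skips an unknown field 3 that a newer writer might have added:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeKnown walks a protobuf-encoded buffer of varint fields,
// keeping field 1 (latitude) and field 2 (longitude) and ignoring
// any field number it does not recognize.
func decodeKnown(buf []byte) (lat, lng uint64) {
	i := 0
	for i < len(buf) {
		tag, n := binary.Uvarint(buf[i:])
		i += n
		if tag&7 != 0 { // this sketch only handles wire type 0 (varint)
			break
		}
		val, m := binary.Uvarint(buf[i:])
		i += m
		switch tag >> 3 {
		case 1:
			lat = val
		case 2:
			lng = val
		default:
			// Unknown field: its value was already consumed above,
			// so an old reader simply skips it and moves on.
		}
	}
	return lat, lng
}

func main() {
	// Point{latitude: 41, longitude: 74} plus an unknown field 3.
	msg := []byte{0x08, 0x29, 0x10, 0x4a, 0x18, 0x07}
	fmt.Println(decodeKnown(msg)) // 41 74
}
```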
Schema evolution
A stub class generated by Protocol Buffers (that you generally never have to touch) can provide much of the functionality of JSON without its headaches. Your generated classes evolve along with your schema (once you regenerate them, admittedly), leaving more room for you to focus on the challenges of keeping your application running and building your product.
Validations
The required, optional, and repeated keywords in Protocol Buffers definitions are extremely powerful. They allow you to encode, at the schema level, the shape of your data structure, and the implementation details of how classes work in each language are handled for you. Libraries will raise exceptions, for example, if you try to encode an object instance which does not have the required fields filled in. (Note that required and optional as used here are proto2 keywords; proto3 dropped required entirely.) You can also change a field from being required to being optional, or vice versa, by simply rolling to a new numbered field for that value. Having this kind of flexibility encoded into the semantics of the serialization format is incredibly powerful.
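As a sketch of that technique (the message and field names here are made up, and the modifiers are proto2 keywords), retiring a required field in favor of an optional replacement under a new number might look like:

```proto
message User {
  reserved 1;               // was: required string name = 1;
  optional string name = 2; // replacement field under a new number
}
```

The reserved statement prevents the old number from ever being reused, so old and new readers can never misinterpret each other's bytes.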
Language Interoperability
Because Protocol Buffers are implemented in a variety of languages, they make interoperability between polyglot applications in your architecture much simpler. If you’re introducing a new service in NodeJS, Go, or even Elixir, you simply hand the .proto file to the code generator for the target language and you get some nice guarantees about the safety and interoperability between those services.
https://blog.lelonek.me/a-brief-introduction-to-grpc-in-go-e66e596fe244