Extending Envoy with WebAssembly Proxy Filters
Amir Keshavarz
Posted on July 12, 2022
Introduction To Envoy
Envoy is an open-source edge and service proxy designed for cloud-native applications.
We can use Envoy as a gateway or a sidecar proxy in scenarios where we need connectivity between services.
It works by applying a number of filters on the data that goes through its proxy.
We can read or manipulate the data using built-in or custom filters/plugins via a filter chain architecture. This design has a proven track record for this kind of application; we've seen similar designs in NGINX and other reverse proxies.
Envoy Filters
As explained above, we can use filters to do all kinds of stuff on the data received by the listener. Envoy already has many built-in filters written in C++ that you can use.
You can also write your own custom filters if the built-in ones aren't suitable for you. These filters can be written in native C++, Lua, or WebAssembly (WASM).
We'll focus on WebAssembly filters in this article.
WebAssembly
WebAssembly is a portable binary instruction format for virtual stack machines.
It aims to execute at native or near-native speed by assuming a few characteristics of the execution environment.
WebAssembly is designed to be executed in a fully sandboxed environment with linear memory. Because of that design, the module is entirely blind to the host environment; communication happens only through that shared memory and through functions explicitly imported from or exported to the host. Don't worry about that, though, since we won't need to implement these interfaces ourselves.
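As a tiny illustration of that boundary (separate from the filter we'll build later), an AssemblyScript module can declare functions it expects the host to provide and export functions the host may call. The env.host_log import below is hypothetical; it only works if the embedder actually supplies it:

// Hypothetical import: the host must provide a "host_log" function in the
// "env" module; we can only hand it plain numbers, here a pointer and a
// length into the module's own linear memory.
@external("env", "host_log")
declare function host_log(ptr: usize, len: usize): void;

// Exported function: the only way the host can call into this module.
export function add(a: i32, b: i32): i32 {
  return a + b;
}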
Proxy-Wasm
Proxy-Wasm is a set of ABI specifications to use between L4/L7 proxies (and/or other host environments) and their extensions delivered as WebAssembly modules.
Even though it was first designed for Envoy, other proxy projects have also picked it up as the ABI spec for their WASM integration.
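To give a rough feel for what that ABI covers (the runtime library we'll use below hides all of this), the spec defines callbacks the module exports and functions the host provides. The sketch below only lists names; the exact parameters depend on the ABI version:

// Rough, non-exhaustive sketch of the proxy-wasm contract:
//
// Callbacks the module exports and the host calls:
//   proxy_on_context_create(...)      // a new root or per-stream context
//   proxy_on_response_headers(...)    // the hook our filter will use
//
// Functions the host provides and the module imports:
//   proxy_log(...)                    // write to the proxy's log
//   proxy_add_header_map_value(...)   // e.g. add a response header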
Writing Filters
We've chosen AssemblyScript to write our filter. AssemblyScript is a TypeScript-like language designed for WebAssembly.
To interact with the proxy-wasm host, we'd need to export a set of low-level interfaces to the host, but fortunately, there's a package called solo-io/proxy-runtime that does most of the heavy lifting for us. Its repository also includes code samples.
Create a project by using these commands:
npm install --save-dev assemblyscript
npx asinit .
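Our filter will import from @solo-io/proxy-runtime, so add that as a dependency as well:

npm install --save @solo-io/proxy-runtime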
Then add --use abort=abort_proc_exit to the asc build command in package.json:
"asbuild:optimized": "asc assembly/index.ts -b build/optimized.wasm --use abort=abort_proc_exit -t build/optimized.wat --sourceMap --optimize",
assembly/index.ts is now our entry point to the WebAssembly module. Open it and use this sample code as a demo:
// @ts-ignore
export * from "@solo-io/proxy-runtime/proxy";
import { RootContext, Context, registerRootContext, FilterHeadersStatusValues, stream_context } from "@solo-io/proxy-runtime";
// The root context is created once per plugin configuration and is
// responsible for creating a context for each HTTP stream.
class AddHeaderRoot extends RootContext {
  createContext(context_id: u32): Context {
    return new AddHeader(context_id, this);
  }
}

// The per-stream context: its callbacks fire as a request/response
// passes through the filter.
class AddHeader extends Context {
  constructor(context_id: u32, root_context: AddHeaderRoot) {
    super(context_id, root_context);
  }

  // Called when response headers arrive: add our custom header and let
  // the stream continue.
  onResponseHeaders(a: u32, end_of_stream: bool): FilterHeadersStatusValues {
    stream_context.headers.response.add("Hello", "World!");
    return FilterHeadersStatusValues.Continue;
  }
}
registerRootContext((context_id: u32) => { return new AddHeaderRoot(context_id); }, "add_header");
proxy-wasm has a few hook points you can use to plug in your module. We used onResponseHeaders (proxy_on_response_headers at the ABI level) here to add our custom header to the response. With this filter in place, we should see an additional header in the response:
Hello: World!
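Build the module with the script we modified earlier; per the -b flag, the output lands in build/optimized.wasm:

npm run asbuild:optimized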
Now it's time to run Envoy and configure it to use our module.
I recommend using func-e, as it makes installing and running Envoy extremely easy:
curl https://func-e.io/install.sh | bash -s -- -b /usr/local/bin
Open envoy.yaml and configure your Envoy. As we discussed before, Envoy works by chaining several filters, from raw TCP connections up to HTTP and beyond.
Here's a sample config; we'll go through each section below to explain what it means:
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 127.0.0.1, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: service_0 }
                http_filters:
                  - name: envoy.filters.http.wasm
                    typed_config:
                      "@type": type.googleapis.com/udpa.type.v1.TypedStruct
                      type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
                      value:
                        config:
                          name: "add_header"
                          root_id: "add_header"
                          vm_config:
                            vm_id: "my_vm_id"
                            runtime: "envoy.wasm.runtime.v8"
                            code:
                              local:
                                filename: "/path/to/optimized.wasm"
                            allow_precompiled: false
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: service_0
      connect_timeout: 0.25s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: service_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 80
- admin: here we tell Envoy where to listen for its admin interface, which exposes administration and monitoring functionality.
- static_resources: here is where the most exciting stuff happens. We set up our listeners and filters here.
- static_resources.listeners: we set up our actual listeners here.
- static_resources.listeners[0].filter_chains: all traffic filters are configured here. We use the http_connection_manager filter as the ingress for our HTTP traffic. That filter has http_filters, which defines which HTTP filters to run, and that's where we plug in our module.
- static_resources.listeners[0].filter_chains[0].filters[0].http_filters[0]: this is our WebAssembly module. We specify where to find the module itself (/path/to/optimized.wasm) and which root context to use (add_header).
- clusters: we put our origin services here. Note that this cluster points at 127.0.0.1:80, so make sure something is actually serving there, or point it at your own upstream.
To run Envoy, simply run this command:
func-e run -c /path/to/envoy.yaml
Now try it out and you should see the additional header:
curl -I http://127.0.0.1:8080
Response:
HTTP/1.1 200 OK
content-type: text/html
date: Mon, 11 Jul 2022 23:59:07 GMT
server: envoy
content-length: 345
x-envoy-upstream-service-time: 196
Hello: World!
Yay! We just used a WebAssembly module to inject custom headers into the response.
Conclusion
Even though this post only scratches the surface, it shows what proxy-wasm is capable of. You can write your filters in whatever language you like, and they'll run at near-native speed.
Many companies are investing heavily in server-side WebAssembly, and more exciting things are to come.
I'm also planning to write more about WebAssembly and its future, if you're interested in the subject.