Rust, Lua, and KumoMTA

  • August 3, 2023

After talking to many high-volume senders, we know there is a common need to selectively route messages over different channels based on performance history, availability, or other criteria.  For instance, if you manage a large brand, you may want one segment of your message load delivered via SendGrid, another via Mailgun, and yet another through your own local MTA. That can give you more control over your overall delivery, but it is more challenging to implement than it sounds.

KumoMTA was designed from the ground up to be a message handler, not just an email engine.  To us, a message is a package of data that needs to be collected from one place and delivered to another for processing.  This could be an email accepted via SMTP or HTTP and then delivered via SMTP, as you would typically expect.  However, we might also deliver that email via HTTP to another service’s HTTP API for onward processing. Perhaps you want to translate the message into a push notification or SMS and then hand it to a service for alternate delivery.  Perhaps you have logs you want to transform and deliver as webhooks or as some other HTTP feed.  A message is a message is a message… you get the idea. Knowing that, something like the path shown below is entirely possible.

[Diagram: mghttp]
So, looking back at the initial need, what if we could inject a message stream into KumoMTA where each message carries a header or metadata value that determines how it is routed? Or what if we could route messages through certain ESPs based on the recipient's mailbox domain?  How about simply spreading messages by percentage, where 20% of all messages go through SendGrid, 10% go through SparkPost, and the rest are delivered from the local MTA? What if we got really crazy, measured the recent delivery success of each route, and dynamically changed the routing based on performance?

The diagram below roughly shows how one might arrange delivery routes in this scenario.

[Diagram: KumoRoutes]

The core engine of KumoMTA is written in Rust, which makes it extremely fast, but for configuration we chose Lua rather than static files.  Lua is easy to read and write, yet it is flexible, fast, and can natively hook into C if needed. Both Rust and Lua are established, supported languages with their own communities. Using Lua to script the powerful functions written in Rust makes for a formidable tool.

As an example, one could configure the HTTP listener as shown below to read the “x-tenant” email header and store it in a metadata value called “tenant” for later use in the script. As a safety measure, we then remove the x-tenant header so it does not pass through to the recipient.

kumo.on('http_message_generated', function(msg)
  -- Assign tenant based on the X-Tenant header.
  local tenant = msg:get_first_named_header_value('x-tenant') or 'default'
  msg:set_meta('tenant', tenant)
  msg:remove_x_headers { 'x-tenant' }
end)
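Earlier we mentioned splitting traffic by percentage. A minimal, purely illustrative sketch of that idea, written against the same hook and tenant values used in this post, might look like the following. The 20/10 split and the use of math.random are our own assumptions rather than a built-in KumoMTA feature, and if you used this approach it would take the place of the simpler header-only handler above.

kumo.on('http_message_generated', function(msg)
  -- Honor an explicit x-tenant header if one was supplied,
  -- otherwise spread messages across routes by percentage.
  local tenant = msg:get_first_named_header_value 'x-tenant'
  if not tenant then
    local roll = math.random(100)
    if roll <= 20 then
      tenant = 'via_sg' -- roughly 20% of traffic via SendGrid
    elseif roll <= 30 then
      tenant = 'via_mg' -- roughly 10% via Mailgun
    else
      tenant = 'default' -- everything else stays on the local MTA
    end
  end
  msg:set_meta('tenant', tenant)
  msg:remove_x_headers { 'x-tenant' }
end)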

Then, in the queue configuration, we use the tenant value to route each message to the appropriate place.  In this example, we look for a tenant of “via_mg” or “via_sg” and hand off to the matching custom Lua constructor; anything else falls through to the default queue configuration.

kumo.on('get_queue_config', function(domain, tenant, campaign)
  -- Routing for the Mailgun HTTP API
  if tenant == 'via_mg' then
    return kumo.make_queue_config {
      protocol = { custom_lua = { constructor = 'make.mailgun' } },
    }
  end

  -- Routing for the SendGrid HTTP API
  if tenant == 'via_sg' then
    return kumo.make_queue_config {
      protocol = { custom_lua = { constructor = 'make.sendgrid' } },
    }
  end

  -- Everything else uses the default configuration
  return kumo.make_queue_config {}
end)
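Routing by mailbox domain, also mentioned earlier, fits the same event, since the first argument to get_queue_config is the destination domain of the queue. A hedged sketch follows; the domains chosen and the decision to push them through Mailgun are illustrative only, and in practice you would fold this check into the handler above rather than register a separate one.

kumo.on('get_queue_config', function(domain, tenant, campaign)
  -- Illustrative only: deliver a couple of specific mailbox domains
  -- through the Mailgun constructor, regardless of tenant.
  if domain == 'yahoo.com' or domain == 'aol.com' then
    return kumo.make_queue_config {
      protocol = { custom_lua = { constructor = 'make.mailgun' } },
    }
  end
  return kumo.make_queue_config {}
end)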

And finally, the custom Lua constructor for each service defines how we communicate with that service.

------------------------------------------------
--[[ Configure an HTTP injector for Mailgun ]]--
------------------------------------------------
kumo.on('make.mailgun', function(domain, tenant, campaign)
  local client = kumo.http.build_client {}
  local sender = {}
  -- Get credentials from Vault
  -- (https://docs.kumomta.com/reference/kumo.secrets/load/)
  local mg_apikey = kumo.secrets.load {
    vault_mount = 'secret',
    vault_path = 'mailgun_apikey',
  }

  function sender:send(message)
    local request = client:post 'https://api.mailgun.net/v3/YOUR_DOMAIN/messages.mime'
    request:basic_auth('api', mg_apikey)
    request:form_multipart_data {
      to = message:recipient().email,
      message = { data = message:get_data(), file_name = 'mime.msg' },
    }
    -- Make the request
    local response = request:send()

    -- and handle the result
    local disposition = string.format(
      '%d %s %s',
      response:status_code(),
      response:status_reason(),
      response:text()
    )
    if response:status_is_success() then
      -- Success!
      return disposition
    end

    -- Failed!
    kumo.reject(400, disposition)
  end
  return sender
end)
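The 'make.sendgrid' constructor referenced above follows the same shape. The sketch below is an assumption-heavy illustration rather than a drop-in implementation: SendGrid's v3 mail/send endpoint expects structured JSON rather than raw MIME, so the mapping of recipient, sender, subject, and body shown here is a simplification, and the Vault path 'sendgrid_apikey' is hypothetical.

-------------------------------------------------
--[[ Configure an HTTP injector for SendGrid ]]--
-------------------------------------------------
kumo.on('make.sendgrid', function(domain, tenant, campaign)
  local client = kumo.http.build_client {}
  local sender = {}
  -- Hypothetical Vault path; adjust to wherever you store the key
  local sg_apikey = kumo.secrets.load {
    vault_mount = 'secret',
    vault_path = 'sendgrid_apikey',
  }

  function sender:send(message)
    local request = client:post 'https://api.sendgrid.com/v3/mail/send'
    -- Assumes the loaded secret is a plain string API key
    request:header('Authorization', string.format('Bearer %s', sg_apikey))
    request:header('Content-Type', 'application/json')
    -- Simplified mapping: treat the raw message body as text/plain content
    request:body(kumo.json_encode {
      personalizations = { { to = { { email = message:recipient().email } } } },
      from = { email = message:sender().email },
      subject = message:get_first_named_header_value 'Subject' or '',
      content = { { type = 'text/plain', value = message:get_data() } },
    })
    local response = request:send()

    local disposition = string.format(
      '%d %s %s',
      response:status_code(),
      response:status_reason(),
      response:text()
    )
    if response:status_is_success() then
      return disposition
    end
    kumo.reject(400, disposition)
  end
  return sender
end)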

Even if you are unfamiliar with Lua, you can probably see that the language is relatively simple to read and write.  We define and assign variables, use if/then statements for branching logic, and return success or failure conditions to the calling functions. Using Lua in this way means you are free to create whatever construct you can imagine without being constrained to predefined patterns.  This external routing example is only one of the myriad possibilities you could employ with Lua, Rust, and KumoMTA.

You might also notice that we can use HashiCorp Vault to pull secrets when needed.  This can be incredibly handy when you need to call a variety of secure resources dynamically and don't want to hard-code that data.  We will touch on that more in future posts as well.

Of course, not everyone will want to write Lua scripts or figure out their own deployments.  That is why we offer custom Professional Services and Support programs for customers of all sizes.  Several resources are available if you want to explore on your own, or you can reach out to discuss a custom implementation that fits your specific needs.