Jack McCallum's Personal Site

Building a Distributed Phoenix App with Docker Swarm

Table of Contents


This post will cover setting up a distributed Phoenix application using Docker Swarm and Erlang Releases. The idea here is to get a good distributed environment up locally that will behave like a clustered application on an orchestration service like Kubernetes or ECS. The reason for this post was that I was having a hard time understanding how to go about getting an Elixir cluster up and running that was more complicated than the standard example of running multiple iex sessions with different snames. I was figuring it out as I went along so please let me know if there are any glaring omissions! Beware. It is long.

Creating the App

First let's create our Phoenix project. We are going to keep the application itself very simple so we can focus on the interesting bits. Our application will be a Grocery List tracker. Via a JSON API, you will be able to create any number of grocery lists, add or update items in your grocery lists, and list the contents of your lists.

% mix distributed_groceries --no-ecto --no-brunch --no-html

Add a controller to handle our grocery list operations.

# lib/distributed_groceries_web/controllers/grocery_lists_controller.ex

defmodule DistributedGroceriesWeb.GroceryListsController do
  use DistributedGroceriesWeb, :controller

  def create(conn, %{"name" => name}) do
    case lookup_pid(name) do
      nil ->
        {:ok, pid} = Agent.start(fn -> %{} end)
        register_pid(pid, name)
        json(conn, "Success!: #{name} was created")
      _ ->
        json(conn, "Already Registered")
    end
  end

  def update(conn, %{"id" => name, "item" => item, "quantity" => num}) do
    case lookup_pid(name) do
      nil ->
        json(conn, "#{name} not found")
      pid ->
        Agent.update(pid, fn state -> Map.put(state, item, num) end)
        json(conn, "Success!")
    end
  end

  def show(conn, %{"id" => name}) do
    case lookup_pid(name) do
      nil ->
        json(conn, "#{name} not found")
      pid ->
        json(conn, Agent.get(pid, fn state -> state end))
    end
  end

  defp lookup_pid(name) do
    name
    |> String.to_atom
    |> GenServer.whereis
  end

  defp register_pid(pid, name) do
    Process.register(pid, String.to_atom(name))
  end
end

What we are doing here is creating an Agent to hold the state of the grocery list and then registering the pid with the Process registry so we can look it up later. When we update or fetch our list, we use GenServer.whereis to get our pid and then go from there.
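Condensed into a plain script, the register/lookup round trip looks like this (the name :my_shopping_list and the item are just examples for illustration):

```elixir
# agent_registry_demo.exs - run with `elixir agent_registry_demo.exs`
# A sketch of the create/update/show flow, outside of Phoenix.

# "create": start an Agent holding an empty map and register its pid
{:ok, pid} = Agent.start(fn -> %{} end)
Process.register(pid, :my_shopping_list)

# "update": look the pid up by name, then mutate the Agent's state
list = GenServer.whereis(:my_shopping_list)
Agent.update(list, fn state -> Map.put(state, "Onions", 2) end)

# "show": read the state back through the same lookup
IO.inspect(Agent.get(GenServer.whereis(:my_shopping_list), & &1))
# => %{"Onions" => 2}
```

Nothing Phoenix-specific is involved: the controller is just wrapping these same calls in HTTP actions.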

Disclaimer: This is a bad way to create and hold state. For one, we are allowing users to create atoms in the system, which is a big no-no. It is, however, a good and simple way to demonstrate distribution, so we are going with it. Also, as you'll see, we won't keep this solution in the end.

Now let's add our routes.

# lib/distributed_groceries_web/router.ex

defmodule DistributedGroceriesWeb.Router do
  use DistributedGroceriesWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/", DistributedGroceriesWeb do
    pipe_through :api
    resources "/grocery_lists", GroceryListsController, only: [:create, :update, :show]
  end
end

Start our server.

% mix phx.server

And test that it is working!

# Create our List
% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "my_shopping_list" }'

"Success!: my_shopping_list was created"

# Add an Item/Quantity
% curl -X PUT http://localhost:4000/grocery_lists/my_shopping_list \
  -H 'Content-Type: application/json' \
  -d '{ "item": "Onions", "quantity": 2 }'

"Success!"

# Fetch our List
% curl http://localhost:4000/grocery_lists/my_shopping_list

{"Onions":2}

At this point we have a very basic Phoenix Application running in development mode. Next we are going to Dockerize the app, and use Erlang Releases instead of our mix phx.server command.

Getting Our App Ready For Production

First let's configure our application to use releases. There are many benefits of using releases over the regular mix phx.server, the biggest of which is that you can take full advantage of the BEAM virtual machine that ships with Erlang.

To build our release we will use the wonderful Distillery library. Distillery makes building a release a breeze, and the final output is a simple tar bundle that contains not just our application, but the entire Erlang runtime needed to start it! How cool is that.

First we add distillery to our mix.exs file and fetch it.

# mix.exs

defp deps do
  [
    {:phoenix, "~> 1.3.4"},
    {:phoenix_pubsub, "~> 1.0"},
    {:gettext, "~> 0.11"},
    {:cowboy, "~> 1.0"},
    {:distillery, "~> 2.0"}
  ]
end

% mix deps.get

Then we need to initialize the release configuration using a handy mix task.

% mix release.init

You will see that this generates a rel folder that will handle all configuration we need for our release. If you aren't already familiar with releases I highly recommend spending some time going through the Distillery Docs as there is quite a lot to it so I'll be breezing through it here a bit.

Our config file will look like this. Very sparse.

# rel/config.exs

use Mix.Releases.Config

environment :prod do
  set include_erts: true
  set include_src: false
end

release :distributed_groceries do
  set version: current_version(:distributed_groceries)
  set applications: [
    :runtime_tools
  ]
end

We need to change some things in our Phoenix config as well for prod. Our new prod config file will look like this.

# config/prod.exs

use Mix.Config
config :distributed_groceries, DistributedGroceriesWeb.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [host: "localhost", port: {:system, "PORT"}],
  secret_key_base: "my_super_secret_key",
  server: true

config :logger, level: :info

The only thing of note here is the {:system, "PORT"} syntax. That lets you set the port with an environment variable that is read at runtime rather than baked in at compile time. This syntax is special-cased by the Phoenix Endpoint and should not be used elsewhere in your config.

You can test if everything is set up right by running:

% MIX_ENV=prod mix release --env=prod
# a lot of output

% PORT=4000 _build/prod/rel/distributed_groceries/bin/distributed_groceries console
# You should be in the iex console. Exit and let's delete the _build as we won't be using it

% rm -rf _build

Awesome, we've now got a ready for prod release. We've come a long way already. You could deploy this to EC2 and be up and running in a jiff. Next we are going to Dockerize the app.

Enter: Docker

Create a Dockerfile in the root of your app with the following.

FROM bitwalker/alpine-elixir-phoenix as builder

ENV MIX_ENV=prod

WORKDIR /app

COPY mix.* ./

RUN mix deps.get && mix deps.compile

COPY . .

RUN mix release --env=prod

# Runtime
FROM alpine:3.7

# We need bash and openssl for Phoenix
RUN apk add --no-cache bash openssl

ENV MIX_ENV=prod \
    SHELL=/bin/bash

WORKDIR /app

COPY --from=builder /app/_build/prod/rel/distributed_groceries/releases/0.0.1/distributed_groceries.tar.gz .

RUN tar -xzf distributed_groceries.tar.gz && rm distributed_groceries.tar.gz

CMD ["./bin/distributed_groceries", "foreground"]

This is a multi-stage Docker build that lets us do the heavy lifting in a container that has Elixir, and then deploy our final release to a bare bones Alpine image to keep it very small.

Our docker-compose.yml file will look like this

version: '3.4'

services:
  app:
    build:
      context: .
    ports:
      - "4000:4000"
    environment:
      PORT: 4000

Run our app with docker-compose (I have docker-compose aliased to dc, so that is what you will see from here on):

% dc up --build -d

And after building the image you will be able to hit the same routes as we did when we were running it with mix phx.server.

% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "my_shopping_list" }'

"Success!: my_shopping_list was created"

We now have a dockerized Phoenix app running as an Elixir Release! How good is that.

Make it a Swarm!

Now we are getting to the good stuff. Let's make it so 3 replicas of our app run at the same time. This simulates a production environment where you would want to scale horizontally across multiple servers behind a load balancer. I am using Docker Swarm because it is easy to run locally and comes with Docker for Mac. Swarm does many of the same things that Kubernetes or Amazon Elastic Container Service do, so the lessons should apply well to whatever you use. If you want to learn more about it, check it out here.

We are now going to set up our swarm. The only confusing part is that Swarm requires your image to be stored on a registry. Instead of actually pushing it to Docker Hub or a real registry, I am just going to create a temporary one locally as a service.

# Initialize the Swarm
% docker swarm init
Swarm initialized: current node (7pmx5ggk5dg9jp613q5w4hqot) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0inm1jz9w89zoyrze5mx1zs905zyqsga6mp5f9kyfynmn7ho6u-c3t58txu6r3lbvjf9m06uu236

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Create our Registry
% docker service create --name registry --publish published=5000,target=5000 registry:2
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

Now we need to change our docker-compose.yml file to give our swarm some instructions.

version: '3.4'

services:
  app:
    build: .
    image: 127.0.0.1:5000/distributed_groceries
    environment:
      PORT: 4000
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 100M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:4000"
    networks:
      - distributed_groceries

networks:
  distributed_groceries:


This is saying that we want 3 copies of our application running and we want to restart them on failure. Our image is pointed at the new registry we just created in the prior step which listens on port 5000.

We need to rebuild our app image and push it to our new registry:

% dc build app
.... output

% dc push
WARNING: Some services (app) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Pushing app (
The push refers to repository []
cfff939346b9: Pushed
d468f013f98a: Pushed
4784e5ddfe23: Pushed
f5ab6df426ce: Pushed
cd7100a72410: Pushed
latest: digest: sha256:d6ca13e61d9366d749b86f0d54001fdfd8ac8a71934147ceffbb75b3f416d647 size: 1369

Now we can launch our service!

% docker stack deploy -c docker-compose.yml distributed_groceries
Ignoring unsupported options: build

Creating network distributed_groceries_distributed_groceries
Creating service distributed_groceries_app

Check the status of it like so:

% docker service ls
ID                  NAME                        MODE                REPLICAS            IMAGE                                         PORTS
vqh8kc006kbg        distributed_groceries_app   replicated          3/3          *:4000->4000/tcp
mpmamnzhskqq        registry                    replicated          1/1                 registry:2                                    *:5000->5000/tcp

Tail the logs, and after about 30 seconds you should see them actually come to life. Keep that log tail going in a separate window as we are going to use it next.

% docker service logs -f distributed_groceries_app
distributed_groceries_app.1    | 12:23:52.612 [info] Running DistributedGroceriesWeb.Endpoint with Cowboy using http://:::4000
distributed_groceries_app.2    | 12:23:54.699 [info] Running DistributedGroceriesWeb.Endpoint with Cowboy using http://:::4000
distributed_groceries_app.3    | 12:23:55.101 [info] Running DistributedGroceriesWeb.Endpoint with Cowboy using http://:::4000

Now let's send some requests to our new cluster and see what happens.

% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "new_list" }'

"Success!: new_list was created"

% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "new_list" }'

"Success!: new_list was created"

% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "new_list" }'

"Success!: new_list was created"

% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "new_list" }'

"Already Registered"

I was able to create three grocery lists with the same name before it finally said "Already Registered". Looking at the logs tells me exactly what happened.

distributed_groceries_app.1 12:28:00.943 [info] POST /grocery_lists
distributed_groceries_app.1 12:28:00.994 [info] Sent 200 in 50ms
distributed_groceries_app.2 12:28:03.307 [info] POST /grocery_lists
distributed_groceries_app.2 12:28:03.307 [info] Sent 200 in 179µs
distributed_groceries_app.3 12:28:07.794 [info] POST /grocery_lists
distributed_groceries_app.3 12:28:07.795 [info] Sent 200 in 237µs
distributed_groceries_app.1 12:28:10.106 [info] POST /grocery_lists
distributed_groceries_app.1 12:28:10.107 [info] Sent 200 in 229µs

Notice the number at the end of distributed_groceries_app on each line. We were creating the lists on different nodes, so each one thought that no process existed! This is because Process.register is a purely local function, and our nodes have no idea about each other. If you kept messing around with the calls, you could add onions on node 1, then fetch the list from node 2 and there would be no onions! And there is no way to know which node the load balancer will send a given request to. Something must be done. Folks, we have arrived at the most exciting part. We are going distributed.

Going Distributed!

We need to do two things in order to make this app distributed.

  1. Change our code
  2. Change our release

Let's do the code first.

Code Change
# lib/distributed_groceries_web/controllers/grocery_lists_controller.ex

defmodule DistributedGroceriesWeb.GroceryListsController do
  use DistributedGroceriesWeb, :controller

  def create(conn, %{"name" => name}) do
    case lookup_pid(name) do
      :undefined ->
        {:ok, pid} = Agent.start(fn -> %{} end)
        register_pid(pid, name)
        json(conn, "Success!: #{name} was created")
      _ ->
        json(conn, "Already Registered")
    end
  end

  def update(conn, %{"id" => name, "item" => item, "quantity" => num}) do
    case lookup_pid(name) do
      :undefined ->
        json(conn, "#{name} not found")
      pid ->
        Agent.update(pid, fn state -> Map.put(state, item, num) end)
        json(conn, "Success!")
    end
  end

  def show(conn, %{"id" => name}) do
    case lookup_pid(name) do
      :undefined ->
        json(conn, "#{name} not found")
      pid ->
        json(conn, Agent.get(pid, fn state -> state end))
    end
  end

  defp lookup_pid(name) do
    :global.whereis_name({:grocery_list, name})
  end

  defp register_pid(pid, name) do
    :global.register_name({:grocery_list, name}, pid)
  end
end

Our controller code change is pretty simple. We are now going to use the Erlang :global registry, which has a slightly different interface. First, it accepts any term as a name, so we can key on a tuple containing the raw string and no longer have our infinite-atom issue. Secondly, it returns :undefined instead of nil when it can't find a pid. All in all though, a pretty easy change to make a controller distributed.

What this gets us is easy synchronization between nodes. The :global registry keeps every node's view consistent: each node keeps its own copy of the registry locally, so a read is very cheap, just like our local Process registry. When we register a name, the node that receives the call adds it to its copy and then messages every other node in the cluster to tell them about the new entry.
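You can exercise the same calls the controller makes in a plain script; they behave identically on a single node, except that in a cluster the registration is replicated everywhere (the list name below is just an example):

```elixir
# global_registry_demo.exs - run with `elixir global_registry_demo.exs`
# :global works on a lone node too; in a cluster the same calls
# are synchronized across every connected node.

me = self()

# Register this process under a tuple key, like the controller does
:yes = :global.register_name({:grocery_list, "my_shopping_list"}, me)

# Lookup returns the registered pid...
^me = :global.whereis_name({:grocery_list, "my_shopping_list"})

# ...and :undefined (not nil) for unknown names
:undefined = :global.whereis_name({:grocery_list, "nope"})
```

Note that :global.register_name/2 returns :yes or :no rather than raising, which is why the controller only pattern-matches on the lookup side.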

Release Change

Now let's deal with our release.

# rel/config.exs

use Mix.Releases.Config

environment :prod do
  set include_erts: true
  set include_src: false
  set vm_args: "rel/vm.args"
  set pre_configure_hooks: "rel/hooks/pre_configure.d"
end

release :distributed_groceries do
  set version: current_version(:distributed_groceries)
  set applications: [
    :runtime_tools
  ]
end

We are going to add a hook to our startup process that sets an environment variable holding the Docker container's IP address, which we can then reference later. In our config file we tell Distillery that we have this hook, and then we can put it here:

# rel/hooks/pre_configure.d/

#!/usr/bin/env bash

echo "Starting Up!"
export CONTAINER_IP=$(hostname -i)

We also have custom args for the VM, which we will put in rel/vm.args.

# rel/vm.args

-name distributed_groceries@${CONTAINER_IP}
-smp auto
-setcookie my_cookie

The key thing here is the name in vm.args. This is how our nodes will identify and connect to each other. Each node will have a name like distributed_groceries@<container IP>, with its own unique IP address that we set by means of the pre_configure hook. The only thing left to do is figure out how to find the other nodes' IP addresses. To do this we are going to use some of Swarm's DNS functionality. Go into one of our deployed containers like so.

% docker container ls

CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS               NAMES
0494238ace12   "./bin/distributed_g…"   23 minutes ago      Up 25 minutes                           distributed_groceries_app.2.94lk7bmc1w7yqi8ihlor7ix2b
c3629e03df60   "./bin/distributed_g…"   23 minutes ago      Up 25 minutes                           distributed_groceries_app.1.kohz2jojy1hb5qqj43ktklvtr
3aec15b6cdf1   "./bin/distributed_g…"   23 minutes ago      Up 25 minutes                           distributed_groceries_app.3.gag8oumajklvh7ql1kpqotlfu

# Pick any of them
% docker exec -it distributed_groceries_app.2.94lk7bmc1w7yqi8ihlor7ix2b bash

# Do an nslookup
bash-4.4# nslookup tasks.distributed_groceries_app
nslookup: can't resolve '(null)': Name does not resolve

Name:      tasks.distributed_groceries_app
Address 1: 0494238ace12
Address 2: distributed_groceries_app.1.kohz2jojy1hb5qqj43ktklvtr.distributed_groceries_distributed_groceries
Address 3: distributed_groceries_app.3.gag8oumajklvh7ql1kpqotlfu.distributed_groceries_distributed_groceries

Here we can see that tasks.distributed_groceries_app in Swarm will give us all the IP addresses we need for our service. This is the key step that will be different in whatever orchestration environment you are using.

We are going to use that information in our startup routine to connect to all other service nodes.

# lib/distributed_groceries/application.ex

defmodule DistributedGroceries.Application do
  use Application

  # See
  # for more information on OTP Applications
  def start(_type, _args) do
    import Supervisor.Spec

    connect_to_cluster()

    # Define workers and child supervisors to be supervised
    children = [
      supervisor(DistributedGroceriesWeb.Endpoint, [])
    ]

    # See
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: DistributedGroceries.Supervisor]
    Supervisor.start_link(children, opts)
  end

  # Tell Phoenix to update the endpoint configuration
  # whenever the application is updated.
  def config_change(changed, _new, removed) do
    DistributedGroceriesWeb.Endpoint.config_change(changed, removed)
    :ok
  end

  def connect_to_cluster do
    # Docker internal DNS lookup
    {string, _} = System.cmd("nslookup", ["tasks.distributed_groceries_app"])

    # Fetch IPs from the output string, skipping our own
    ips =
      Regex.scan(~r/10\.[0-9]\.[0-9]\.\d{1,}/, string)
      |> List.flatten
      |> Enum.reject(fn x -> x == System.get_env("CONTAINER_IP") end)

    # Connect to Nodes
    Enum.each(ips, fn ip ->
      Node.connect(:"distributed_groceries@#{ip}")
    end)
  end
end
This will be our startup file. Notice that we are calling connect_to_cluster in the start function that all OTP applications must implement, so right off the bat each node will connect to all the other nodes in our network. This is certainly not the most elegant or efficient method, as every node tries to connect to every other node when a single connection is enough to join a new node to the cluster, but it is clear and simple enough for our purposes.
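The IP-extraction step can be tested on its own against a canned nslookup response; the addresses below are invented for the example:

```elixir
# ip_scan_demo.exs - the same Regex.scan pipeline as connect_to_cluster/0,
# run against a fake nslookup output (these addresses are made up).
output = """
Name:      tasks.distributed_groceries_app
Address 1: 10.0.0.5 distributed_groceries_app.1
Address 2: 10.0.0.6 distributed_groceries_app.2
Address 3: 10.0.0.7 distributed_groceries_app.3
"""

# In the real app this comes from the CONTAINER_IP env var
own_ip = "10.0.0.5"

ips =
  Regex.scan(~r/10\.[0-9]\.[0-9]\.\d{1,}/, output)
  |> List.flatten()
  |> Enum.reject(fn ip -> ip == own_ip end)

IO.inspect(ips)
# => ["10.0.0.6", "10.0.0.7"]
```

Rejecting our own address matters: without it each node would also try to Node.connect to itself on startup.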

Let's build our app, push it to the registry, and deploy it.

% dc build

% dc push

% docker stack deploy -c docker-compose.yml distributed_groceries

Redeploying can take up to 1 minute so keep an eye on the log to see when the containers start up.

Let's check if they are connected by going into a remote_console and looking at connected nodes.

% docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED                  STATUS              PORTS               NAMES
87daf4853a7c   "./bin/distributed_g…"   Less than a second ago   Up 44 seconds                           distributed_groceries_app.1.u6jz28sf7qg9lz2zbk6kejpf2
c6e184c10203   "./bin/distributed_g…"   Less than a second ago   Up About a minute                       distributed_groceries_app.3.wm27sfhhnrjijhjl25pozj8z6
373b82d83c1e   "./bin/distributed_g…"   Less than a second ago   Up About a minute                       distributed_groceries_app.2.pm41id1jwae63kz69fy7dgu35
c150d1f1d283        registry:2                                    "/ /etc…"   About an hour ago        Up About an hour    5000/tcp            registry.1.7qeyf4p3h64wno9y7rv0kovy8

# Grab any one (this can take a while)
% docker exec -it distributed_groceries_app.1.u6jz28sf7qg9lz2zbk6kejpf2 bin/distributed_groceries remote_console
Starting Up!
Erlang/OTP 21 [erts-10.0.3] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [hipe]

Interactive Elixir (1.7.1) - press Ctrl+C to exit (type h() ENTER for help)

iex(distributed_groceries@10.0.0.5)1> Node.list
[:"distributed_groceries@10.0.0.6", :"distributed_groceries@10.0.0.7"]

Sweet. We can see that our node is part of a cluster with the other two nodes. This means that when we send some test requests we should be able to see that everything will keep in sync.

% curl -X POST http://localhost:4000/grocery_lists \
  -H 'Content-Type: application/json' \
  -d '{ "name": "my_shopping_list" }'

"Success!: my_shopping_list was created"

% curl -X PUT http://localhost:4000/grocery_lists/my_shopping_list \
  -H 'Content-Type: application/json' \
  -d '{ "item": "Avocados", "quantity": 2 }'

"Success!"

% curl http://localhost:4000/grocery_lists/my_shopping_list

{"Avocados":2}
It all looks correct, and the cool part is that if you look at our logs those requests all hit a different node.

distributed_groceries_app.3| 07:48:16.246 [info] POST /grocery_lists
distributed_groceries_app.3| 07:48:16.250 [info] Sent 200 in 3ms
distributed_groceries_app.1| 07:48:29.366 [info] PUT /grocery_lists/my_shopping_list
distributed_groceries_app.1| 07:48:29.447 [info] Sent 200 in 81ms
distributed_groceries_app.2| 07:48:36.551 [info] GET /grocery_lists/my_shopping_list
distributed_groceries_app.2| 07:48:36.554 [info] Sent 200 in 2ms

Our global process registry knows which node each grocery list Agent lives on, and can send messages to it regardless of where the request lands. This is due to Elixir's location transparency with processes, and it is why making Elixir apps distributed is less grueling than in other languages and frameworks. One final thing that is pretty cool is that we can scale our cluster and everything will stay in sync. Let's use Swarm's built-in scaling to demonstrate this.

% docker service update distributed_groceries_app --replicas 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged

We added two more nodes, and after they start up we can see in our remote console that they are now part of our node cluster.


That was quite a long post, nice work if you made it this far. Using Docker and Swarm we are able to do some pretty cool things with Elixir right on our laptops. Being able to do this locally allows us to be much more confident when the time comes to do it in production. This application will also let you try all kinds of different distributed strategies, such as more intelligent node discovery, redundancy of processes between nodes, and distributed supervision strategies. All the code from this can be found here, with each section as a separate commit.

Additional Resources

One Time Pad in Elixir

Recently I have been very interested in learning about cryptography. I just finished Crypto by Steven Levy, which gives a great background on the crypto field and the advances that enabled the current 'Crypto Craze'. I am also working my way through the Stanford Cryptography Course on Coursera, which dives deep into some implementations. Given all that, I thought it would be fun to put some of these ideas and learnings into code, and since I am still working on my Elixir skills, let's do it in that. Elixir is actually a nice language for crypto for two reasons:

1) You can use the underlying Erlang crypto library which has many powerful functions.

2) Crypto lends itself very well to functional programming as you are generally passing messages through a number of transformations.

Some of the functions are pretty elegant when seen in Elixir code.

One Time Pad

In this post we will be implementing a basic one time pad. The one time pad is a fundamental concept in cryptography, as it is a cipher that cannot be cracked. A cipher is a means of obscuring a message so that it cannot be read by someone who intercepts it. Many types of ciphers have been used throughout history, and most have been cracked. The one time pad is unique in that it is not just hard to crack, it is mathematically impossible to crack. Unfortunately the one time pad is not all that practical, as the key must be at least as long as the message you wish to encrypt and can only ever be used once. In practice this means that if you can get someone the key in a secure way, you might as well just give them the message that way too.
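The uncrackability rests on XOR being its own inverse: applying a truly random, single-use key once encrypts, applying it again decrypts, and without the key every same-length plaintext is an equally valid decryption. A quick sketch of that round trip (the message is just an example):

```elixir
# xor_roundtrip_demo.exs - XOR with the key is its own inverse,
# which is all a one time pad relies on.
message = "ATTACK AT DAWN"

# Random key exactly as long as the message
key = :crypto.strong_rand_bytes(byte_size(message))

cipher = :crypto.exor(message, key)

# XOR-ing with the same key again recovers the plaintext
^message = :crypto.exor(cipher, key)

# A different key "decrypts" to garbage; without the real key,
# an attacker cannot distinguish right guesses from wrong ones
wrong = :crypto.exor(cipher, :crypto.strong_rand_bytes(byte_size(message)))
IO.inspect(wrong == message)
```

The module below does the same thing, with some Base16 encoding sprinkled in for printability.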

Nevertheless, let's implement it.

defmodule OneTimePad do
  def encode(message), do: encode(message, message |> String.length |> generate_key)

  def encode(message, key) when byte_size(message) * 2 == byte_size(key) do
    cypher = message
             |> Base.encode16
             |> :crypto.exor(key)
             |> Base.encode16

    IO.puts "Key: #{key}"
    IO.puts "Cypher: #{cypher}"

    cypher
  end

  def encode(_, _), do: IO.puts "Incorrect Byte Sizes"

  def decode(cypher, key) when byte_size(key) * 2 == byte_size(cypher) do
    raw = cypher
          |> Base.decode16!
          |> :crypto.exor(key)
          |> Base.decode16!

    IO.puts "Cypher: #{cypher}"
    IO.puts "Decoded Message: #{raw}"

    raw
  end

  def decode(_, _), do: IO.puts "Incorrect Byte Sizes"

  defp generate_key(length) do
    length
    |> :crypto.strong_rand_bytes
    |> Base.encode16
  end
end

Running it:

Interactive Elixir (1.6.0) - press Ctrl+C to exit (type h() ENTER for help)

iex(1)> cypher = OneTimePad.encode("The secret lies with Charlotte")
Key: A52D26CAC642EB7F6A603CE37EE4F15FD3EAA1D08FEB6317BADA7FA6D9E2
Cypher: 7401047C0403717174050207737100740074010401737370017C73017102077673047378760572080A767171000B07067573720201007602730D7307

iex(2)> key = "A52D26CAC642EB7F6A603CE37EE4F15FD3EAA1D08FEB6317BADA7FA6D9E2"

iex(3)> OneTimePad.decode(cypher, key)
Cypher: 7401047C0403717174050207737100740074010401737370017C73017102077673047378760572080A767171000B07067573720201007602730D7307
Decoded Message: The secret lies with Charlotte
"The secret lies with Charlotte"

What's with all the Base.(de|en)code16! calls? Well, that is to make the binaries printable in iex. I am curious if anyone has a better way to do it in Elixir - please let me know in the comments.

Fantasy Football GroupMe Stats

Two years of Fantasy Football conversation in a GroupMe chat, visualized using my GroupMe Stats script and Circos. GroupMe is a messaging application where users can 'Like' other messages. My friends use this to like funny, clever, or otherwise relevant posts. This has led to some interesting data over the years.

Posts Liked By Member

Chord Chart

This chord chart shows 'likes' given and received by each member of our Fantasy Football GroupMe.

Data for the chart above (you can plug it into Circos to reproduce it):

labels Roland Christian Matt Dan Kyle Mike Chuck Kevin Jack Luke Derek Gavin 
Roland 0 3 11 56 19 18 9 38 15 12 3 31 
Christian 0 0 3 2 6 4 3 4 1 3 0 11 
Matt 8 9 0 40 53 25 32 60 37 30 12 71 
Dan 18 5 63 0 65 29 35 112 39 39 16 101 
Kyle 9 7 39 32 0 21 22 60 29 25 4 56 
Mike 7 8 26 44 33 0 5 34 7 27 4 27 
Chuck 0 3 9 13 4 11 0 8 4 5 0 13 
Kevin 6 2 28 37 25 19 16 0 20 27 7 55 
Jack 8 6 22 28 23 5 9 33 0 14 3 45 
Luke 11 11 32 42 34 27 19 58 21 0 8 42 
Derek 2 0 6 2 3 0 1 3 2 4 0 3 
Gavin 13 6 41 68 38 24 18 53 29 23 9 0 

Additional Stats

Post to Like Ratios
Roland: 72 messages and 82 likes for a ratio of 1.14 likes per message
Christian: 31 messages and 60 likes for a ratio of 1.94 likes per message
Matt: 223 messages and 281 likes for a ratio of 1.26 likes per message
Dan: 521 messages and 366 likes for a ratio of 0.7 likes per message
Kyle: 169 messages and 303 likes for a ratio of 1.79 likes per message
Mike: 45 messages and 183 likes for a ratio of 4.07 likes per message
Chuck: 107 messages and 170 likes for a ratio of 1.59 likes per message
Kevin: 263 messages and 465 likes for a ratio of 1.77 likes per message
Jack: 90 messages and 208 likes for a ratio of 2.31 likes per message
Luke: 114 messages and 209 likes for a ratio of 1.83 likes per message
Derek: 68 messages and 66 likes for a ratio of 0.97 likes per message
Gavin: 253 messages and 456 likes for a ratio of 1.8 likes per message
Total Likes Given
Roland: 217
Christian: 39
Matt: 382
Dan: 527
Kyle: 306
Mike: 224
Chuck: 71
Kevin: 246
Jack: 201
Luke: 307
Derek: 26
Gavin: 324
Self Likes Given
Matt: 1
Dan: 2
Chuck: 1
Kevin: 2
Gavin: 1
Average Likes Per Member Per Person - Who are you the toughest critic of?

(Number of times you like it vs number of posts)

TOTAL AVERAGE: 11.46% of posts

-- Mike - Liked 40.0%
-- Gavin - Liked 12.25%
-- Dan - Liked 10.75%
-- Kyle - Liked 11.24%
-- Kevin - Liked 14.45%
-- Christian - Liked 9.68%
-- Luke - Liked 10.53%
-- Chuck - Liked 8.41%
-- Jack - Liked 16.67%
-- Matt - Liked 4.93%
-- Derek - Liked 4.41%

TOTAL AVERAGE: 2.02% of posts

-- Dan - Liked 0.38%
-- Gavin - Liked 4.35%
-- Kevin - Liked 1.52%
-- Mike - Liked 8.89%
-- Kyle - Liked 3.55%
-- Luke - Liked 2.63%
-- Jack - Liked 1.11%
-- Chuck - Liked 2.8%
-- Matt - Liked 1.35%

TOTAL AVERAGE: 21.92% of posts

-- Dan - Liked 7.68%
-- Kyle - Liked 31.36%
-- Jack - Liked 41.11%
-- Luke - Liked 26.32%
-- Mike - Liked 55.56%
-- Gavin - Liked 28.06%
-- Christian - Liked 29.03%
-- Kevin - Liked 22.81%
-- Chuck - Liked 29.91%
-- Roland - Liked 11.11%
-- Derek - Liked 17.65%
-- Matt - Liked 0.45%

TOTAL AVERAGE: 36.47% of posts

-- Roland - Liked 25.0%
-- Kyle - Liked 38.46%
-- Gavin - Liked 39.92%
-- Luke - Liked 34.21%
-- Jack - Liked 43.33%
-- Mike - Liked 64.44%
-- Christian - Liked 16.13%
-- Matt - Liked 28.25%
-- Kevin - Liked 42.59%
-- Chuck - Liked 32.71%
-- Derek - Liked 23.53%
-- Dan - Liked 0.38%

TOTAL AVERAGE: 17.03% of posts

-- Roland - Liked 12.5%
-- Jack - Liked 32.22%
-- Mike - Liked 46.67%
-- Luke - Liked 21.93%
-- Gavin - Liked 22.13%
-- Kevin - Liked 22.81%
-- Matt - Liked 17.49%
-- Christian - Liked 22.58%
-- Chuck - Liked 20.56%
-- Dan - Liked 6.14%
-- Derek - Liked 5.88%

TOTAL AVERAGE: 11.66% of posts

-- Roland - Liked 9.72%
-- Kyle - Liked 19.53%
-- Dan - Liked 8.45%
-- Jack - Liked 7.78%
-- Gavin - Liked 10.67%
-- Luke - Liked 23.68%
-- Christian - Liked 25.81%
-- Matt - Liked 11.66%
-- Kevin - Liked 12.93%
-- Chuck - Liked 4.67%
-- Derek - Liked 5.88%

TOTAL AVERAGE: 3.82% of posts

-- Luke - Liked 4.39%
-- Christian - Liked 9.68%
-- Mike - Liked 24.44%
-- Dan - Liked 2.5%
-- Gavin - Liked 5.14%
-- Kevin - Liked 3.04%
-- Jack - Liked 4.44%
-- Kyle - Liked 2.37%
-- Matt - Liked 4.04%
-- Chuck - Liked 0.93%

TOTAL AVERAGE: 14.45% of posts

-- Roland - Liked 8.33%
-- Kyle - Liked 14.79%
-- Dan - Liked 7.1%
-- Gavin - Liked 21.74%
-- Mike - Liked 42.22%
-- Luke - Liked 23.68%
-- Matt - Liked 12.56%
-- Christian - Liked 6.45%
-- Jack - Liked 22.22%
-- Chuck - Liked 14.95%
-- Kevin - Liked 0.76%
-- Derek - Liked 10.29%

TOTAL AVERAGE: 10.71% of posts

-- Roland - Liked 11.11%
-- Mike - Liked 11.11%
-- Christian - Liked 19.35%
-- Gavin - Liked 17.79%
-- Luke - Liked 12.28%
-- Kyle - Liked 13.61%
-- Dan - Liked 5.37%
-- Matt - Liked 9.87%
-- Kevin - Liked 12.55%
-- Jack - Liked 4.44%
-- Chuck - Liked 8.41%
-- Derek - Liked 4.41%

TOTAL AVERAGE: 16.58% of posts

-- Roland - Liked 15.28%
-- Kyle - Liked 20.12%
-- Gavin - Liked 16.6%
-- Dan - Liked 8.06%
-- Mike - Liked 60.0%
-- Christian - Liked 35.48%
-- Kevin - Liked 22.05%
-- Matt - Liked 14.35%
-- Jack - Liked 23.33%
-- Chuck - Liked 17.76%
-- Derek - Liked 11.76%

TOTAL AVERAGE: 1.37% of posts

-- Kevin - Liked 1.14%
-- Dan - Liked 0.38%
-- Roland - Liked 2.78%
-- Matt - Liked 2.69%
-- Kyle - Liked 1.78%
-- Gavin - Liked 1.19%
-- Luke - Liked 3.51%
-- Jack - Liked 2.22%
-- Chuck - Liked 0.93%

TOTAL AVERAGE: 18.91% of posts

-- Roland - Liked 18.06%
-- Kyle - Liked 22.49%
-- Dan - Liked 13.05%
-- Jack - Liked 32.22%
-- Mike - Liked 53.33%
-- Christian - Liked 19.35%
-- Matt - Liked 18.39%
-- Kevin - Liked 20.15%
-- Luke - Liked 20.18%
-- Chuck - Liked 16.82%
-- Derek - Liked 13.24%
-- Gavin - Liked 0.4%

More details on the script can be found in my prior blog post on it. Credit to the fantastic Circos for the chord chart.