Would you like to create a Wwise FX plugin but have no audio programmer at hand? This approach could help you. This piece examines the niche Pure Data --> Wwise toolchain developed and made public by Enzien a couple of years ago. It's a smart and well-crafted tool that delivers what it promises.

Also, go out there and find an audio programmer.

The Pure Data - Heavy - Visual Studio - Wwise chain

If you look around you'll find a number of articles that describe this approach in more or less detail:

I can confirm the process works and that it's possible to port it to Python 3 (see below) if you need to.

Without any build and tooling support the process looks like this:

  1. An audio designer / tech audio guy builds a Pure Data patch.
  2. The hvcc chain parses the .pd file and generates an intermediate heavy representation.
  3. The tool chain wraps that intermediate representation in heavy.
  4. Depending on the generator you've selected (Unity, Wwise, VST, …) a template is chosen.
  5. Then it combines the intermediate representation and the template and generates your target.
  6. Using Visual Studio or XCode or whatever you compile your target.
  7. The resulting objects must be copied to Wwise’s plugins directory.
  8. Open Wwise.
  9. Include the plugin (either source or FX) somewhere in the structure.
  10. Hook the Syncs.
  11. … Go to 1 to iterate on the plugin if polishing is required.

This process is something a programmer is more or less used to, albeit begrudgingly. But I would need an extremely motivated audio designer to go through these steps without starting a riot in the process.
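
If you do go down this road, a small wrapper script can take some of the sting out of steps 2 to 7. Below is a minimal sketch in Python; the hvcc flags, the solution name and the Wwise plugin path are assumptions based on my own setup, so adjust them to yours.

# Hedged sketch: automate "patch -> hvcc -> msbuild -> copy into Wwise" in one go.
# Paths, names and flags below are assumptions; adapt them to your installation.
import shutil
import subprocess
from pathlib import Path

PATCH = Path("patches/my_effect.pd")      # the patch built by the audio designer (step 1)
OUT_DIR = Path("build/heavy")             # where hvcc drops the generated project
WWISE_PLUGINS = Path(r"C:\Program Files (x86)\Audiokinetic\Wwise\Authoring\x64\Release\bin\plugins")

def build_and_deploy():
    # Steps 2-5: parse the patch, build the heavy representation and generate the Wwise target.
    subprocess.run(["hvcc", str(PATCH), "-n", "MyEffect", "-g", "wwise", "-o", str(OUT_DIR)], check=True)

    # Step 6: compile the generated Visual Studio solution (the solution name is a guess).
    solution = OUT_DIR / "wwise" / "MyEffect.sln"
    subprocess.run(["msbuild", str(solution), "/p:Configuration=Release"], check=True)

    # Step 7: copy the resulting binaries next to the other Wwise plugins.
    for dll in (OUT_DIR / "wwise").rglob("*.dll"):
        shutil.copy2(dll, WWISE_PLUGINS)

if __name__ == "__main__":
    build_and_deploy()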

Enzien (see below) explored a solution where you could upload your patch to a website and it returned the compiled artifact. That reduces the friction, somewhat. And that’s perhaps something you could deploy in your company. But, realistically, how often is this chain going to be used? If your goal is to generate Sources / FX for Wwise I have some trouble finding the ROI. Perhaps I’m missing something.

Audiokinetic has its own templating tools: wp

The tool chain described above makes sense when you’re targeting a number of different systems. But if your goal is to cover only Wwise, Audiokinetic has their own toolset in place: Plugin tools. You can see them in action in this video:

I wonder if it'd make more sense to expand Wwise's tools directly, perhaps hooking the hvcc compiler inside them somehow. What's clear is that the Visual Studio template included in the sources is out of date and should be updated to VS2019. As of today, retargeting the solution will do the trick.

Enzien Audio

The patch compiler used in this toolchain was developed by Enzien Audio. As far as I can tell the company closed a couple of years ago, but they uploaded parts of their tech stack to the enzienaudio GitHub. I've mainly been looking into the patch compiler hvcc; it's a smart PD / Max patch-to-code compiler (transpiler? something-piler for sure). It's worth mentioning that hvcc can generate outputs for Unity, VST or web audio, among others.

Modernizing hvcc to Python3

Unfortunately the code in the repo is written in Python 2.7 and I'm trying to keep my codebases in Python 3. Since this was my first time doing a port like this, I took a look around:

Python-Modernize and a bit of wiggly-waggly with encodings did the trick. But if you decide to take this route please keep in mind that the first thing I did was to reduce the scope of the tool to my precise use case: Pure Data –> Wwise plugin. That made the code to transform way smaller and easier to handle.
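
For illustration, the encoding wiggly-waggly mostly boiled down to changes of this flavor (a generic Python 2 to 3 example, not a diff from the actual hvcc sources; the file name is hypothetical):

from pathlib import Path

template_path = Path("templates/HvWwisePlugin.cpp")  # hypothetical template file

# Python 2 habit: read raw bytes and sprinkle unicode()/decode() calls wherever things break.
# contents = unicode(open(str(template_path)).read(), "utf-8")

# Python 3: open the file as text with an explicit encoding and work with str throughout.
contents = template_path.read_text(encoding="utf-8")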

About Pure Data

I think we can all agree that vanilla Pure Data evokes the worst of Soviet brutalism. Jagged lines, spartan black and white, mysterious words, tildes everywhere and that distinct Tcl tint. Don't panic, it's going to be all right.

At the same time it is a fascinating piece of multimedia software. Probably the closest you can get to the metal if you want to use a computer and stay away from C++. The community has been there forever, the resources are abundant and it's extremely well documented. It's so alive that other projects like Purr Data are trying to bring the user experience into this century.

There are many tutorials freely available on YouTube that cover pretty much everything, from synthesis to FX to video:

  • Lawrence Moore has 2 full courses uploaded ~ 2016. The material is instructive but can be extremely dry.
  • Really Useful Plugins has bite sized techniques. These videos are concise to a blink-and-miss-it degree. Quite fun to follow along.
  • GEM video generation. Because, of course, PD can generate video too.

The original Pure Data was developed by Miller Puckette and, if you're interested in the theory behind electronic music, he has kindly published his book in HTML form.

In short, if you're interested in learning more about this power tool, the community has your back.

Conclusions

To me, if you're interested in working with audio and you're technically inclined, learning the basics of Pure Data is a reasonable investment. Regarding the workflow described here, as it is right now I can't see it working at any scale. Unless I'm missing something, it seems like a neat trick, a bit gimmicky even. It'd need quite a lot of work to become production ready. Not to mention that if you exposed this mechanism to your company expecting the audio designers to use it, you'd most probably end up building the patches yourself.

Bellido out, good hunt out there!

/jcb


What to do when you want to distribute a Python solution through pip but you only have a Subversion server? You can turn your code into a package and ask pip to kindly use your SVN server as a trusted source. This text describes a way of doing exactly that with minimal configuration and without bothering your busy build engineers.

This piece covers how to do the packaging manually. cookiecutter would be another option but seems overkill for what I want to do. The only dependency of note is a web-browsable Subversion repository or any index based web server.

Why package internal tooling?

If you're extremely lucky, all your code runs on the libraries contained in the base Python distro. Congratulations. You can distribute your solution by email if you want. But perhaps you want to keep some form of versioning, or expose sensible entry points, among other things.

I arrived at this problem while developing an internal tool for a team of sound designers working on Wwise. I was virtualenv-ing my way around the development, but after a couple of dependency installs I started thinking about distribution. I considered the classic requirements.txt included in the sources and asking the guys to pip install -r requirements.txt, but somehow that solution feels like it belongs more to a CI/CD environment than to end-user distribution. Not to mention that you're asking your end users to sync your sources, and perhaps you don't want that.

Then there’s the problem of executing the tool itself. There is a difference between:

python cli_amazing_tool -a foo -b bar -c aux

and

cli_amazing_tool.exe -a foo -b bar -c aux

And I had the added problem that my solution was bound to a specific version of an internal library, also written in Python. That library was under heavy development and maintaining matching versions was fundamental for my sanity.

Python’s packaging system can take care of all this with ease. With just one file.

Setup.py: configuring a Python package

First things first, the documentation for the setuptools is here. If you skim the documentation for the good stuff you’ll see a couple of almost ready-to-be-used configurations.

The content of an extremely basic setup.py file could look like this:

from setuptools import setup, find_packages

setup(
    name="cli_amazing_tool",
    version="1.2.3",
    packages=find_packages(),

    entry_points={
        "console_scripts": [
            "amazing_tool = cli_amazing_tool.main:main"
        ],
    },

    install_requires=["waapi-client==0.3b1"],
    author="jcbellido",
    author_email="jcbellido@jcbellido.info",
    description="A waapi-client based tool",
    keywords="wwise WAMP waapi-client",
    project_urls={
        "Documentation": "http://confluence.jcbellido.info/display/DOCS/cli+amazing+tool",
        "Source Code": "https://your.svn.server.net/svn/trunk/sources/cli-amazing-tool",
    },
)

As you can imagine, packaging is a big problem; that's why we have build and release teams. But in the case of the lonely developer with a shoestring budget this approach can do perfectly well. There are a couple of tricks in the previous configuration:

  • install_requires: This is the key feature for me. pip will take care of the package dependencies through this list.
  • packages=find_packages(): this is the auto mode for setuptools packaging. As far as I understand it, it acts as a crawler and adds every package (ie: anything with an __init__.py) to the final .tar.gz. In my case this includes the tests, but honestly I prefer it that way. It has been useful a couple of times.
  • entry_points: When defined, pip will create .exe wrappers for your packages. This example is overly simplistic. It should be trivial to create meta-packages that expose a suite of related commands (see the sketch right after this list).
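
As an example of that last point, a suite of related commands could be exposed from a single package like this (the extra command and module names here are made up for illustration):

from setuptools import setup, find_packages

setup(
    name="cli_amazing_suite",        # hypothetical meta-package name
    version="0.1.0",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # each entry becomes its own .exe wrapper when the package is installed
            "amazing_tool = cli_amazing_tool.main:main",
            "amazing_export = cli_amazing_tool.exporters:export_main",   # hypothetical module
            "amazing_report = cli_amazing_tool.reports:report_main",     # hypothetical module
        ],
    },
)

After a pip install, each entry shows up as its own executable on the user's PATH.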

Package Generation

Once your setup file is ready, from the project root:

python setup.py sdist

This command will take the package definition contained in setup.py and pack everything into a .tar.gz file. In this case, something like cli_amazing_tool-1.2.3.tar.gz; that's the file you must push to your repository.

Something I observed is that the command sometimes complains about a weird dependency after a change to setup.py. Before worrying, delete the .egg-info directory and re-execute your setup.py. It worked for me pretty much every time.

Installing on user machines

Once your packages are submitted to your repository, and if you're lucky, your IT department will have pre-installed Python on your users' machines. If that's not the case you can always install Chocolatey and ask the guys to install the dependencies themselves; actually, I tend to prefer it that way. This opens the door to even more control over the execution environment of your solutions, but that's not the point of this text.

Once the interpreter is installed you just need them to execute something like:

pip install cli_amazing_tool==1.2.3 --trusted-host your.svn.server.net -f http://your.svn.server.net/svn/packages/something/cli-amazing-tool

… a command that can live perfectly in a PowerShell script.

If you pay attention you'll see --trusted-host your.svn.server.net. This can help if you don't want to use HTTPS; perhaps your local SVN server isn't configured for it, or you don't want to hassle with server certificates. It's an option. Not recommended, but useful.
The -f option just adds a new package source to pip.

Profit

Once the first loop is done and your users can painlessly install and update their tools, you'll have reached a form of parity with more compily languages. Having your code contained as a package will help you if you decide to go CI, and it simply makes things clearer in the long run.

For me there's one more step to take, though. The full packaging: every dependency included in a single redistributable file. I read about a couple of options, like shiv, that seem to do what I need. But that's material for another text.

Bellido out, good hunt out there!

/jcb


Maybe you heard about Outer Wilds from Mobius. Perhaps you saw an article somewhere about it. I didn't. It just popped up on Game Pass one day. I downloaded it believing that I was about to play Obsidian's new title, The Outer Worlds. What a surprise. From time to time a title appears that makes me remember why I love videogames.

The premise: You're an astronaut. Your mission: to explore the solar system.

At the beginning of “Outer Wilds” everything feels like a toy. A space program with 4 astronauts and a couple of technicians. Your ship is made of wood and tomato-can grade aluminium. For guidance and orientation, a radio akin to a Smell-O-Scope. And a tiny solar system with a handful of planets ready for exploration, waiting for you. But before takeoff two interesting things happen: a psychotropic encounter with the remnants of an old alien civilization and, even more importantly, you get an alien-to-English translator.

That translator alone is the core of several science fiction works. It's an intriguing thought exercise: what if we meet peaceful aliens but we're unable to communicate with them? Stanislaw Lem's Solaris explores it. Or, more recently and directed by Denis Villeneuve, Arrival depicts the massive endeavor that communicating with a truly alien life form would be. It's such a common theme that it appears even in pop literature such as Warhammer's Horus Heresy.

But Outer Wilds is not hard sci-fi. Without any guidance you simply pick a planet at random and thrust your way there. Protected by your space suit made of leather, a fish bowl and some judiciously applied duct tape.

My first expedition

With little motivation beyond “go fly and explore, maybe look for the stranded astronauts”, I venture forward. The soundtrack is playful and feels like family camping trips and mellow hillocks. The navigation feels clunky at first. I pick a destination at random, Brittle Hollow, and I'm on my way. Distances and sizes in Outer Wilds are minimal, everything is compact. I reach the place quickly, almost too quickly. When I arrive at the planet's surface I'm greeted by a desolate plain of ash and rock, and I notice an angry moon that spits magma rocks that fall around me. Exploring the surface I find some ruins and some alien texts that I can read using my auto-translator, and soon an entrance to what I think is a cavern. It's not. It's a full alien city. I'm still an amateur astronaut and I botch what looks like an easy jump. I'm falling towards the planet's core, except there's no core: it's a black hole, and I'm transported somewhere else. The Sun fills the whole screen and I'm floating stranded in space, just another satellite orbiting it.
I don't believe Outer Wilds is trying to be a horror game, but I'm scared. I'm expecting the universe to behave in a certain way and this one doesn't. The music has changed and now it's closer to Jerry Goldsmith's Alien.

Everything ends with a flash of light and I wake again at the beginning of the game. What was that light? Where does it come from?

Spirals and Fragments

This game loops over itself. The character dies a hundred different ways: burnt by exotic matter, squashed by a tornado, landing too fast or getting devoured by a space fish monster. After every death you wake again, by the launching pad bonfire, ready to roast a marshmallow and lift off.
Beyond the exploration of the system and the challenge of navigating the spaceship, you try to understand what's going on. Who were the super-advanced aliens that inhabited the system prior to your people and, more importantly, where did they go, or why did they disappear? But that search is limited to a 20-minute period. The Sun will explode and everything will be reset.

The fragmented narrative

The UI element in the ship … one of the best graphical representations of how knowledge is formed, and somehow connected with Dark Souls' approach.

Annapurna

Outer Wilds was published by Annapurna Interactive. Publishing videogames and profiting from them is not an easy business. But it seems like this American company has been part of some of the most interesting (to me) games released recently. What Remains of Edith Finch by Giant Sparrow is a game with powerful family moments that remind me of Gabriel García Márquez's novels. It's so short and packed with so many great moments that it's difficult not to recommend it.

I wish the best to Annapurna. I believe they're doing something good for videogames as a medium. And I look forward to the next projects they're involved with.


During the last months I've been involved in an infrastructure project. The idea is to offer on-demand resources; think Jenkins or GitLab or any render queue. In my case, users are working from different countries and time zones. This is one of the cases where building a web-based front end makes sense.
The challenge: I've never built anything mid-sized for the web, only micro solutions that needed close to zero maintenance and were extremely short-lived. To make things more interesting, the backend was offering its services through gRPC.

A note for other tool programmers

This is a piece about my second project using React. The first one, even if functional, was a total mess. I'm not suggesting the approach contained here makes sense for everyone, but it has worked for me and I think keeping it documented has value.

The main issue with web stuff, for me, is the amount of thingies you need to juggle to build a solution. To name just a few, this project contains: JavaScript, React, Babel, JSX, gRPC, Docker, Python, CSS, Redux and nginx. It's surprisingly easy to drown in all that stack.

Starting: react-admin + tooling

I needed an IDE for JavaScript and I didn't want to consume a license from the web team. So I started with Visual Studio Code. Coming from an overbloated VS Pro, the difference in speed and responsiveness is remarkable. Adding the JavaScript support was also quite simple using a Code plugin. Below it, I had a common npm + node installation. For heavier environments, JetBrains' WebStorm IDE is what the professionals around me use most frequently.

From that point a simple:

npm install -g create-react-app
npm install react-admin
create-react-app my-lovely-stuff

will get you started. You can see a demo of react-admin from the marmelab team here:

With all that in place, how to start? After checking with more experienced full-time web devs, they recommended using react-admin (RA from now on) as a starting point. Later I realized how much RA's architecture would impact the rest of the solution. But as a starting point it is great. The documentation is really good; I learnt a lot from it. From the get-go you'll have a framework where it's easy to implement:

  1. List, Show detail, Edit and delete flows
  2. Pagination
  3. Filtering results
  4. Actions in multiple selected resources
  5. Related resources and references, aka “this object references that other thing”, making navigation between resources simple.

Halfway through development I found out about React hooks. I strongly suggest watching this video; it was well worth the time I put into it:

I used only a fraction of the potential hooks offer and that was more than enough. The resulting code is leaner and more expressive. If I need to write another web app using React, I'll try to squeeze more out of them.

RA is based on a large number of 3rd party libraries. For me, the two most important ones are:

  1. React-Redux: I use it mainly in forms and to control side effects. Some of the forms I have in place are quite dense and interdependent.
  2. Material-UI: Controls, layout and styles. According to what I'm seeing around lately, it has become an industry standard. Out of the box it's going to give you a Google-y look and feel.

Unless you're planning to become a full-time web developer, I don't believe it's particularly useful to dig too deep into those two monsters of libraries. But having a shallow knowledge of their intent can be quite useful.

gRPC in the browser: Envoy + Docker

The backend was serving its data through a gRPC endpoint and was being built at the same time I was working on the frontend. One of the main concepts of gRPC is the .proto file contract. It defines the API surface and the messages that will travel through it. Google et al. have released several libraries to consume gRPC (based around that .proto specification) in many different programming languages, including JavaScript, .NET Core or Python.

But the trick here is that you can’t directly connect to a gRPC backend from the browser. In the documentation, Envoy is used to bridge those. In other scenarios it’s possible to use Ambassador if your infrastructure supports it.

Since the backend was under construction, I decided to write a little mock based on the .proto file using Python. Starting from the .proto file, I return the messages populated with fake but not random data. The messages are built dynamically through reflection from the grpc-python toolset output. The only manual work needed is to write the RPC entry points that are automatically forwarded and answered by the mock.
Once the fake server is written you still need to make it reachable from the web browser. It's here where docker-compose made my life way simpler. I wrote a compose file with Envoy and my server connected, and I had a reliable source of sample data to develop the UI. In this case I was lucky, since my office computer runs a Pro version of Win10, making Hyper-V available, and the Docker toolset for Windows machines has improved a lot lately.
It's perfectly possible to achieve similar results using non-Pro versions of Windows or, even simpler, by using a Linux or Mac desktop.
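
For reference, the skeleton of such a mock looks roughly like the sketch below. The service and message names come from a hypothetical resources.proto, and the reflection-based message filling is left out to keep it short.

# Minimal gRPC mock server sketch. Assumes stubs generated from a hypothetical resources.proto with:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. resources.proto
from concurrent import futures
import grpc

import resources_pb2
import resources_pb2_grpc

class MockResourceService(resources_pb2_grpc.ResourceServiceServicer):
    def ListResources(self, request, context):
        # Return fake but stable data so the UI has something predictable to render.
        items = [resources_pb2.Resource(id=str(i), name="fake-resource-%d" % i) for i in range(25)]
        return resources_pb2.ListResourcesReply(items=items)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    resources_pb2_grpc.add_ResourceServiceServicer_to_server(MockResourceService(), server)
    server.add_insecure_port("[::]:50051")   # Envoy bridges the browser's gRPC-Web calls to this port
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()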

This small solution turned out to be quite important down the line, given the amount of iteration the backend went through. In the web world there are many great API / backend mocking solutions based on REST calls. But when you're working with gRPC the ecosystem is not as rich (or I didn't find anything mature at that moment).

Other lessons

One of the interesting side effects of using RA is the impact of the dataProvider abstraction. The whole architecture orbits around classic HTTP verbs. In the end, most of my code, beyond some specific layouting and extra forms, was pure glue. I have full translation layers in place: from gRPC into JavaScript objects and vice versa.

In my domain, and due to API restrictions, I was getting different categories of resources through the same gRPC endpoints. After thinking a bit about it, the simplest solution I found was to implement pre-filtered data providers and give them resource-relevant names. In other words, I ended up with a collection of data providers that internally pointed at the same gRPCs but carried relevant names. This allowed me to offer meaningful routes while keeping the UI code isolated from the backend design.

Containers, Docker in my case, are becoming more and more important as I go forward. If you know nothing about them, I strongly suggest putting some time into them. It can be a game changer, even if your intent is to keep your dev environment as clean as humanly possible.


DICE’s summer party

Following a well-established tradition, DICE celebrated the arrival of summer by organizing a great party. They rented a great place, the House under the Bridge, built under a tall highway bridge over the lakes, with a nice and informal environment.

This party reminded me of the ones arranged by EA Madrid's team. Colleagues formed bands and performed for everyone. It was good fun, including arts and crafts. We had a really nice time.

Meeting old friends

It’s a busy summer visit-wise. We reunited with old colleagues and went everywhere around town. We covered the mandatory visits and then some uncommon corners here and there.

And, on top of everything, we had the chance to hang out with this German hunk. Lovely dude.

Something I never thought I’d do was to visit Skansen during Midsommar. I particularly enjoy Swedish traditional songs. And yes, we danced like little green frogs.

Improving life a little

Over the last months we've put a lot of effort into improving our apartment. We renovated the place and started buying new and hopefully better appliances. Let me introduce you to our new vacuum cleaner!

It's a vacuum cleaner.

A lovely bag-less machine that's able to deal with cat hair and looks a little bit like an Autobot. While we were looking for models to buy we decided to check YouTube for suggestions. We discovered that there's a ring of Scandinavian YouTubers that compare models and do all sorts of field tests on these machines. It was fascinating. Never thought anyone could get so excited about cleaning carpets.

And one last thing. We went to a live recording of No Such Thing as a Fish.

A comedy podcast around trivia and curious facts. The podcast is funny and I recommend it quite often.

Cooking, expanded

During our time in Poland I discovered Vindaloo and truly liked how violent that dish could be. But when reading about it in more detail I discovered that it’s not supposed to be poison. It’s supposed to be vinegary. So I tried my hand at cooking it:

Some lessons learnt: careful with the veggies or you'll end up with a soup. A tasty one, but that's not how the dish is supposed to go. Also, sweet tamarind is not the same as cooking tamarind. It was my first time trying this; it'll get better next time.

Some weeks ago we were lucky enough to get invited to a nice Spanish get together. Since we’re that fancy we brought some cinnamon buns and some traditional pickled herring.

The recipe couldn't be simpler, even if you start from raw fish. I discovered later that not everyone loves pickled herring; it's almost as if no one does. If you look closely you'll see the cinnamon rolls just before baking.

Also, I decided to buy a crockpot for my parents. Quite a normal one, but it seems that it's a hit these days. Makes their days simpler.

And one last thing! A big grocery store opened very close to our place. It seems their plan is to specialize in imported foods and they have a Polish section. We were missing the Polish goodies so much.

If I have a recipe pending, one that I want to master, that's Bigos. A Polish dietary nuclear bomb. In other words: it's phenomenal. Don't get intimidated by the different meats you need for it; just follow Chef John's advice:

That happens to be one of the best YT cooking channels I know about.


My intention is to write a more technical entry … thoon.


It's been a handful of pretty busy months. For starters, we've returned to Sweden. Back in the motherland. Here you can see us mingling with the locals in the faithful Corner. In any case, nothing will ever eclipse the glory of the Sports Bar back in Warsaw.

Our timing was perfect and I rejoined DICE during the final dev months of Battlefield V. It's a gorgeous game. I'm truly looking forward to trying it with some peers back home.

Swedish things

Due to a strange planetary alignment, we had a number of super traditional Swedish events. I went to my first crayfish party. Including silly hats and duck faces.

And a couple weeks later, we attended a wedding. The venue was at the shore of a beautiful lake and we had a terrific time.

Everybody had a blast and we danced to a couple ABBA songs too many. The Swedes have it in them.

New adventures in cooking

During our time in Warsaw I grew fond of YT's cooking shows. And thanks to Mr. Sexy-Lips Adamo I discovered “Hot Ones”, a pretty entertaining show that awakened my interest in spicy foods and sauces.

While I was walking around the Old Town I found a little British food store that has an extremely promising collection of mean-spirited sauces:

… needless to say we’ve stocked quite heavily.

One of my biggest recent cooking discoveries is that you can bake an omelette, so I've been experimenting with this approach a little bit. For instance, here you have a pic of the super-meaty minced meat + bacon approach:

The lady has a collection of Swedish cooking books dating back to the mid 70s. One of them is solid gold: Recipes from Swedish old days,

where I found a lacquered goose recipe that I truly want to try. I might give it a go if the lovely dudes from CDP decide to pay a visit to the far North.


During the last 16 / 18 months I’ve been working primarily with Microsoft technologies on the Desktop. A big lump of: WPF + OpenXML + Entity Framework. In other words: big stacks, massive code bases and tons of hours trying to understand what is going on under every:

using( var context = new DbContext() ) {
    var stuff = await context.Thangs.Where( w => w.Foobar < 3 ).ToListAsync();
    ...

.. block in my code.

I felt a little bit saturated. I wanted a project on the side, something interactive. And that's how I found Godot, an open source game engine and an all-in-one package.

Getting engine + tooling

This game engine was born around 2007 and it's been in development since then. The project got an MIT license at the beginning of 2014. The mainline today is on version 3.0.5 and yes, there are versions for Mac + Linux. And just to make things even simpler, you can fetch a precompiled Godot from Steam. It doesn't get simpler than that.

It's also possible to build the engine, which includes the tooling, from source, even though it's not the simplest distribution system I've seen. The “Compiling” documentation includes several step-by-step guides that worked well for me.

If you're working under Windows you'll notice that the size of the .exe is around 20 MB. That's all; that includes both the environment and the runtime. The editor, once opened, looks like this:

If you’re interested in testing the game in the image, you can try to play it in a browser

As usual if you’re planning on releasing in different targets, like iOS or Android, you’ll need the SDK and the size may vary. At the moment there’s no official support for consoles.

Learning Godot engine

An interesting way of approaching this technology is to check some projects. Luckily, a game jam was hosted on Itch.io quite recently, the godot temperature game jam, and the projects submitted are interesting to play and check. It's possible to download the sources and build the games yourself; most of the titles I checked host the sources on GitHub.

Godot's architecture and code base make it well suited for teaching and for starting out in gamedev. It's possible to develop new behaviors using the internal language, GDScript.

It's also relatively simple to find YT playlists covering the basics of the engine. One example, found on Game From Scratch's YT channel, could be this one: Godot 3 Tutorial Series

I know there are a number of online courses, in the shape of Patreons + online unis, etc. But I don't know enough about those to have a clear opinion.

Meanwhile, in the world

And now for something completely different: while I was deep inside one of Microsoft's tech stacks, the guys have been busy and we have nice new toys:

Blender is looking better than ever and it's approaching 2.8 at the whopping speed of a second per second. Perhaps this video could help you catch up:

.. fantastic work.

Cyberpunk 2077 has a new trailer after years of silence. There’s quite a lot to write about CD-Projekt, timing, marketing, and whatnot.

.. but for now, it's enough to say that I might have some part in the behind-closed-doors demo at 2018's E3.

Battlefield V seems to be, somehow, advancing in time and the team travelled from WWI, into WWI + 1, or, in a trailer:

.. which, as usual, looks spectacular.


During the last few weeks I've been asked to write some documentation for the localization tech stack I've been working on for the last 18-ish months. In the team I'm working with nowadays, there's a group of specialized documentation writers. Tech writers.

And when you check the docs they create, it's clear they're professionals. Unified styles, neutral English, linked documents, different sorts of media including images, gifs, videos, links to code, examples in the game … everything you can imagine. It looks costly, and it is.

And that works well for teams of some size. Let's say sizes over one person. I've been driving absolutely every aspect of the stack by myself: DBs / caching / services / UI / exchange formats. On two very different projects at the same time. Starting from scratch. It's been a blast. But it's a messy blast.

How it should look, for me

When consuming documentation I want 2 sources of information:

  1. As a final user of the stack. What does the user see? How does the UI work? What are the metaphors deployed?
  2. High level architectural view of the code base. Server based? Service based? Local user only?

… and, once the intent is clear and the language shared with the user base is defined, then, if possible, show me some unit cases. Nothing fancy or spectacular, just something to start tweaking here and there.

That would be the gold standard.

Then, obviously it’s better when the code is not rotten. But that’s a daily fight. And a different discussion.

So what’s next?

Umh, after the E3 mayhem, maybe I'll be able to convince some producer to redirect some peers from QA to work with me for a couple of weeks, and we'll go together through all the insane nooks and crannies that one-man operations tend to generate at these scales. If I'm lucky, this person will be able to create some end-user documentation and we'll discover some easy points for improvement.

Meanwhile, obviously, I have even more stuff to develop, including a nasty data migration related to a deep change in our domain.

Oh, the good ol' times when I believed that running Doxygen and fleeing was enough.


I was worried about the performance of our Database Servers. Our access patterns are mostly read-only, so why not cache the data we need in an intermediate server? Redis appears to be a good solution.

Too many readers, few writers

From a data life cycle point of view, my current domain has the following characteristics:

  1. It evolves by big chunks and the number of users allowed to make changes on it is very limited.
  2. There are hundreds of concurrent users on read-mode.
  3. It’s not mission critical for the consumers of the data to be perfectly up to date. They can wait some minutes.
  4. My budget is close to nil.

I didn’t want to route the readers of the data to the main DBs. That’d create the perfect bottleneck. And I’ve been looking into caching all that information, in memory, for a couple weeks.

There are quite a lot of solutions out there. Microsoft has a couple: Velocity or AppFabric Cache. But in the Linux world there are way more options. At the beginning, though, I was lazy and silly and I wanted a full Windows stack.

First approach: memcached

Memcached is one of the veteran solutions in this endeavor. It's incredibly stable and Facebook (among many others) has been maintaining it for quite a long time. Here you have a talk by the man himself.

It's pretty rare to have scaling problems that compare to FB's. So I decided to take a look. There are at least 2 major versions of this solution that are precompiled for Win32. They work. But everybody agrees that the performance is not the same.

VirtualBox + Debian 9

Once I admitted that I should host my services on Linux, I went for one of the virtualization solutions I know, VirtualBox, and I noticed with glee that it was possible to install Debian. That was my first Linux distro. It feels a bit like returning home.

Then it was a matter of apt-getting make, gcc, vim, terminator, etc.

And I went to fetch the Memcached sources. But, on my way there, I thought that since I had a “full fledged” Linux, why not check around a little. And then Redis happened.
On paper, Redis' features are a superset of Memcached's. So I decided to give it a go. With a Linux in place, it was a painless four steps to build from source.

Then you end up with something like this:

The base sources include the tooling for the DB, which is super nice.

C# + Redis: a lot of “Stacks”

Since I wanted a fast start on all this Redis biz, I checked PluralSight. That, in hindsight, was a bit of a mistake. Redis has a great amount of material on YouTube; they even have a conference.

My first approach was to write something in C# to feed a Redis DB. Following the advice from the PS course I opted for ServiceStack.Redis and it works very well. Except for one detail. My budget for all this is exactly zero dollars, and ServiceStack is clear regarding its pricing. Needless to say, I reached the starter limits in exactly one hour.
All that was, clearly, my bad. I should've read the service's terms better.

Thankfully there is a good list of other C# Clients and I decided to take a look into StackExchange.Redis. Yup I know the names are super confusing. But that’s what happened. Combine that with some fever and you have a glorious headache just waiting for you.

The code itself is reasonably clear, in a somewhat “unit test” format:

const string redisConnectionString = "YourServerIP:6379,allowAdmin=true";
ConnectionMultiplexer cm = ConnectionMultiplexer.Connect( redisConnectionString );
IDatabase db = cm.GetDatabase();
Assert.IsNotNull( db );
string value = "abdcdfge";
db.StringSet( "myKey", value );
string recovered = db.StringGet( "myKey" );
Assert.IsTrue( value.Equals( recovered ) );

With this library in place, projecting my data into a Redis-friendly format is just a matter of enough wiggly LINQ.

Consuming the cache from C++

Unfortunately the vast majority of the consumers of my domain work over C++ stacks. So there was the problem of finding a library that could communicate with the database with the minimum number of dependencies. I believe I have a good candidate here: cpp_redis

Painless to compile and try. But it's still too soon to have a fully formed opinion about it. I might post something more down the line.

Some lessons learned

  1. First and foremost you should check your tweets three times before clicking “send”. I wrote “StackExchange” when I wanted to say “ServiceStack” and all hell broke loose.
  2. Redis is part of the NoSQL family of DBs. No schema enforced. That gives you a lot of opportunities. But it puts a lot of pressure on the main keys.
  3. This DB supports several data types as primitives: sets, lists, hashes, … The natural candidate to persist objects seems to be Hash (a small sketch follows this list), but I need to dig deeper into all this.
  4. The subscribe commands are incredibly powerful. Just for those alone, Redis is worth your time.
  5. This DB supports Lua on server side. And who doesn’t love Lua, right?
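
On that third point, here is a tiny sketch of the hash idea. I'm using the redis-py client just to keep it short (and assuming a recent version of it); StackExchange.Redis exposes the equivalent HashSet / HashGetAll calls.

# Hedged sketch: persisting a small object as a Redis hash with redis-py.
import redis

r = redis.Redis(host="YourServerIP", port=6379, decode_responses=True)

# One hash per object, keyed by a stable, well thought-out main key.
r.hset("asset:1234", mapping={"name": "explosion_big", "category": "sfx", "duration_ms": "2300"})

# Reading it back gives you a plain field -> value dictionary.
asset = r.hgetall("asset:1234")
print(asset["name"], asset["duration_ms"])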


jc_bellido

My name is Carlos Bellido and I work coding games in Stockholm. I rediscovered swimming and gyms after moving to Sweden. Keep in mind that Kalles Kaviar is an acquired taste.


I work in the audio department at FatShark.