2024-10-25

Playing with microphones

The latest LED board designs have included a TDK PDM I2S microphone - the idea was to make sound reactive LED strips.

It is tiny (3.5mm x 2.65mm x 0.98mm) and cheap(ish) at $0.76. But it is digital.

Now, this was a can of worms for which I was ill prepared. I2S is not like I2C, it seems, and there are a number of ways of doing it.

This particular device is PDM based, so it has just a clock and a data line - the clock goes to the microphone, and data comes back. The concept is that you have two of these, one set for the right channel and one for the left, sharing the same clock and data lines, with left and right output on opposite edges of the clock. It is fixed at 16 bits per channel, and works over a specific (wide) range of clock frequencies, and hence sample rates.

But other I2S devices work in a different way, with extra pins.

I went through some stages for this...

  • Simple analogue microphone (cheaper) - realised that was a daft idea.
  • This mic, set to right channel, and it did not work.
  • This mic, set to left channel, and it did not work.

What did not work was using WLED, a common LED driver package which works nicely on our LED modules. We realised it wanted I2S, and then realised it only did PDM on the left channel, hence the different iterations, but no joy. Hopefully we can get some feedback into that project so it does work, as TDK is likely a good brand. It is amazing how sensitive the microphone is.

My guess is that WLED is not clocking the microphone at a rate the device will handle, sadly.

So time to do it myself, as always!

I coded I2S as per the ESP32 library, and, well, it worked: I got raw data. I could even tell it I want mono only, and which channel. Amusingly, using the wrong channel works too, as the data line floats for the other channel - though I picked up some high frequency noise that way. So best to set it correctly (I nearly said "right").
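
Roughly what that looks like, using the ESP-IDF legacy I2S driver, is below - just a sketch with hypothetical pin numbers and buffer sizes, not the actual board code:

    #include <stdint.h>
    #include "freertos/FreeRTOS.h"
    #include "driver/i2s.h"

    #define MIC_CLK_GPIO 42             // hypothetical pins
    #define MIC_DAT_GPIO 41

    void mic_init(void)
    {
        i2s_config_t cfg = {
            .mode = I2S_MODE_MASTER | I2S_MODE_RX | I2S_MODE_PDM,   // PDM receive, ESP32 drives the clock
            .sample_rate = 30000,
            .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
            .channel_format = I2S_CHANNEL_FMT_ONLY_LEFT,            // mono, pick the channel the mic is strapped to
            .communication_format = I2S_COMM_FORMAT_STAND_I2S,
            .dma_buf_count = 4,
            .dma_buf_len = 256,
        };
        i2s_driver_install(I2S_NUM_0, &cfg, 0, NULL);
        i2s_pin_config_t pins = {
            .bck_io_num = I2S_PIN_NO_CHANGE,
            .ws_io_num = MIC_CLK_GPIO,          // in PDM mode the clock comes out on the WS pin
            .data_out_num = I2S_PIN_NO_CHANGE,
            .data_in_num = MIC_DAT_GPIO,
        };
        i2s_set_pin(I2S_NUM_0, &pins);
    }

    // Read a block of raw 16-bit samples (blocks until the DMA has them)
    int mic_read(int16_t *buf, int samples)
    {
        size_t got = 0;
        i2s_read(I2S_NUM_0, buf, samples * sizeof(int16_t), &got, portMAX_DELAY);
        return got / sizeof(int16_t);
    }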

The next step was a simple FFT function - I have not gone for anything fancy or über-efficient, as it keeps up well enough, especially once I added some simple oversampling on its input. I cannot clock the microphone as slowly as I would like for normal working. But even at 30k samples/sec I could keep up with an FFT 30 times a second, just, all done with floats. These ESP32s are impressive.
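
For anyone curious what "nothing fancy" means, a plain in-place radix-2 FFT in floats, with simple averaging of the oversampled input in front of it, is something like this sketch (not the actual code):

    #include <stdint.h>
    #include <math.h>
    #include <complex.h>

    #define OVERSAMPLE 4                // average this many raw samples per FFT input sample

    // Simple oversampling: average blocks of raw samples (crude low-pass plus rate reduction)
    void downsample(const int16_t *raw, float *out, int n)
    {
        for (int i = 0; i < n; i++)
        {
            float sum = 0;
            for (int j = 0; j < OVERSAMPLE; j++)
                sum += raw[i * OVERSAMPLE + j];
            out[i] = sum / OVERSAMPLE;
        }
    }

    // In-place radix-2 FFT, n must be a power of two
    void fft(float complex *x, int n)
    {
        for (int i = 1, j = 0; i < n; i++)
        {                               // bit-reversal reordering
            int bit = n >> 1;
            for (; j & bit; bit >>= 1)
                j ^= bit;
            j ^= bit;
            if (i < j)
            {
                float complex t = x[i];
                x[i] = x[j];
                x[j] = t;
            }
        }
        for (int len = 2; len <= n; len <<= 1)
        {                               // butterfly passes
            float complex wlen = cexpf(-2.0f * (float) M_PI * I / len);
            for (int i = 0; i < n; i += len)
            {
                float complex w = 1;
                for (int k = 0; k < len / 2; k++)
                {
                    float complex u = x[i + k];
                    float complex v = x[i + k + len / 2] * w;
                    x[i + k] = u + v;
                    x[i + k + len / 2] = u - v;
                    w *= wlen;
                }
            }
        }
    }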

I have made the working audio range configurable, and averaged the FFT down to a small number of bins (24) spaced logarithmically in frequency to fit better with musical scales (with an option for linear spacing).

This has led to some simple audio-responsive LED schemes - a simple brightness-based audio spectrograph, smoothing from 24 bins to however many LEDs you have in a chain. And also a simple overall volume-based bar graph, and one that is RGB for three levels of sound.
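
The smoothing from 24 bins to however many LEDs you have is just linear interpolation between adjacent bands - something like this sketch (names are hypothetical):

    #define BINS 24

    // Spread BINS band levels across a chain of 'leds' LEDs by blending adjacent bands
    void bands_to_leds(const float band[BINS], float *led, int leds)
    {
        for (int i = 0; i < leds; i++)
        {
            float pos = leds > 1 ? (float) i * (BINS - 1) / (leds - 1) : 0;
            int b = (int) pos;
            int b2 = b + 1 < BINS ? b + 1 : b;
            float frac = pos - b;
            led[i] = band[b] * (1 - frac) + band[b2] * frac;
        }
    }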

Lessons learned.

Some automatic audio gain was pretty simple.
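
Something along these lines - track a slowly decaying peak and normalise against it (a sketch, with made-up attack/release constants):

    // Very simple automatic gain: follow peaks immediately, decay slowly, normalise.
    static float agc_peak = 0.01f;      // floor avoids divide-by-zero and amplifying silence

    float agc(float level)
    {
        if (level > agc_peak)
            agc_peak = level;           // attack: jump to new peaks
        else
            agc_peak *= 0.999f;         // release: decay slowly between loud passages
        return level / agc_peak;        // roughly 0..1
    }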

The next lesson was the log scale on frequency. That was not hard, but I made it an option.

I reduced the full FFT result to a small number of bands, 24.

I also needed to make it track a peak level per band with (configurable) damping to give anything that looks nice.
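
Per band that is roughly this (a sketch, with the damping factor as the configurable part):

    #define BINS 24

    // Peak hold with damping per band, so the display rises instantly and falls smoothly.
    static float peak[BINS];

    void damp_bands(const float *band, float *out, float damping)
    {
        for (int b = 0; b < BINS; b++)
        {
            if (band[b] > peak[b])
                peak[b] = band[b];      // jump up immediately
            else
                peak[b] *= damping;     // e.g. 0.85 falls fast, 0.98 falls slowly
            out[b] = peak[b];
        }
    }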

The frequency range is configurable, but the log scale causes gaps between bands if you go too low, so I ended up with 100Hz to 4kHz for now, spread logarithmically over the 24 bins. The FFT runs at a typical 25 blocks/sec, actually sampling at 50-60k samples/sec and downgrading (by averaging the oversampled input) to 12-15k samples/sec for the FFT, giving results up to 6-7.5kHz, which is more than enough for the bands I am using.
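
The log spread over the bins is roughly this mapping (a sketch with hypothetical names), including forcing at least one FFT bin per band, which is why going too low in frequency would otherwise leave gaps:

    #include <math.h>

    #define BINS 24

    // Average FFT magnitudes into BINS bands spaced logarithmically from lo to hi Hz.
    // mag[] holds half_n magnitudes for sample rate fs, so mag[i] covers i*fs/(2*half_n) Hz.
    void log_bins(const float *mag, int half_n, float fs, float lo, float hi, float band[BINS])
    {
        for (int b = 0; b < BINS; b++)
        {
            float f1 = lo * powf(hi / lo, (float) b / BINS);        // band start frequency
            float f2 = lo * powf(hi / lo, (float) (b + 1) / BINS);  // band end frequency
            int i1 = (int) (f1 * 2 * half_n / fs);
            int i2 = (int) (f2 * 2 * half_n / fs);
            if (i2 <= i1)
                i2 = i1 + 1;            // ensure at least one FFT bin per band (avoids gaps)
            float sum = 0;
            for (int i = i1; i < i2 && i < half_n; i++)
                sum += mag[i];
            band[b] = sum / (i2 - i1);
        }
    }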

I also ended up making the sample blocks fit the LED update rate synchronously, typically 25 or 30 updates per second (configurable), to avoid any extra jitter/aliasing effects.
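
That just means reading exactly one frame's worth of audio per LED update, along the lines of this sketch (reusing the mic_read() sketch above, numbers hypothetical):

    #define FPS         25              // LED updates per second (configurable)
    #define SAMPLE_RATE 30000           // microphone sample rate

    void frame_loop(void)
    {
        static int16_t raw[SAMPLE_RATE / FPS];
        while (1)
        {
            // Blocks for exactly 1/FPS seconds of audio, keeping audio and LEDs in lock step
            mic_read(raw, SAMPLE_RATE / FPS);
            // ... downsample, FFT, log bins, damping, then update the LEDs ...
        }
    }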

Oh, and when using RGBW, with a colour band for the spectrograph, a band level can go over 1 before the automatic gain kicks in, so I made the excess push into turning on the W channel, which worked well.
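
Roughly like this (a sketch, not the actual code):

    #include <stdint.h>

    // Map a band level to RGBW: the colour carries levels up to 1,
    // anything over 1 (before the automatic gain settles) spills into the white channel.
    void band_to_rgbw(float level, uint8_t r, uint8_t g, uint8_t b, uint8_t out[4])
    {
        float c = level > 1 ? 1 : level;
        float w = level > 1 ? level - 1 : 0;
        if (w > 1)
            w = 1;
        out[0] = r * c;
        out[1] = g * c;
        out[2] = b * c;
        out[3] = 255 * w;
    }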

The result

Update: I have now tweaked it and got a cleaner response across the range, putting 50Hz to 6.4kHz into 42 log-based frequency bins. See YouTube for frequency tests.

Now for use with a live band at the pub!

And HIWTSI (my son's business) is doing LED system installs now.

2024-10-06

One Touch Switching

It has been some weeks since One Touch Switching was fully live.

TOTSCO say over 100,000 switch orders now, so it is making good progress, well, in principle.

In practice a lot is working: in terms of volume, the key players, as well as the likes of A&A, are all working reasonably well now, and switches are happening in both directions. We are seeing correct billing as well, which is good.

But there are still some challenges.

  • Whilst I cannot go into details, even the big players are still facing some issues, mostly small but some bigger, some with workarounds for now and fixes being rolled out as updates. Daily calls continue with industry (yes, some I have taken from the pub).
  • A lot of smaller players are catching up, but many face the challenges of the huge holes in the specifications. These are still frozen and so grey areas, errors, and contradictions abound. Only once they are live with other ISPs are the issues apparent. In at least one case we have to ask how a CP managed to be live as they had some fundamental errors (like SOR not being a UUID) that should not have passed even the very poor testing TOTSCO claim to do. Even larger CPs are not entirely agreed on some field specifications because of poor wording in one of the better parts of the spec, but are working on it. The daily calls help, but only happen with the early/bigger CPs.
  • There is now a formal process for inter-CP communications to resolve such issues, but not everyone is on it yet. There is a fallback process as well. So things are happening, and some of these issues with smaller CPs are being fixed as well.

But even now, even this weekend, I have seen incorrect messages and errors. I have reported them, of course, and it may be that these end up as defects on the daily calls; we will see.

I worry what will happen when the daily calls stop - reporting an issue to a CP may mean they ignore it rather than spend resource investigating, fixing, and deploying. At present CPs have that resource assigned, but they will not forever.

What next? Well we keep at it.

The next big step I can only hope for is an unfrozen spec and a lot of clarifications and updates. It will be interesting to see how that process happens, and how we can be involved. I have a lot to say on clarifying the specifications and I hope I can be involved in making it happen. But every change will need a lot of agreement, and in some cases changes by all CPs.

For now, there are some silly compromises, like all strings having a maximum of 256 characters (which resulted from a global update to a Swagger definition system, rather than any informed debate or formal specification change, and is annoying as tinytext in mariadb is 255 characters, not 256). Even so, some agreement on even the vague magnitude of things like correlationID is a good start. I suspect, in practice, that one may get defined as smaller, like 64 characters. In hindsight it should have been a UUID, unique per message, but it is too late to do that now.

The problem is the smaller/newer CPs are not in on that discussion, so don't know. Big CPs guessed at 36 characters (UUID size), 56 characters, 64 characters, and so on, as there simply was nothing in the spec, but most had to set something in their code. We changed our handling within the first few days as we understood how broken the spec was, and now handle any size (well, megabytes), but other CPs don't, and we have limits on a load of other strings anyway.

For now, I have every message we receive, and every message we send, run through my NOTSCO checker and reported to me. I feel it is only fair to test us as well. Over time it will only be problem messages that I need to monitor. It has actually highlighted some issues in what we were sending (mainly where customers manually type an address - I have added more checks now). But monitoring real life messages has also meant updates to our checking in the live system, and updates to my NOTSCO tools.

My latest changes include actually using the longer agreed size of correlationID to tag the message type as a (small) suffix on a UUID, so that we can quickly (before any database connect/check) validate that messages we get back are sensible, and reject those that are not. Why? Well, one small CP is sending nonsensical replies to replies, or reusing correlationIDs from previous messages with different messages, both of which we can now pick up in milliseconds and cleanly reject. It looks like they are working on it, but there have been no actual communications back to us, which is a shame - we're happy to help and advise if only they would talk to us.
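
The idea is roughly this sketch - the suffix tags and helper names here are made up for illustration, not from the spec:

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    // Build an outgoing correlationID as <uuid>-<short message tag>, where the UUID
    // comes from whatever generator is actually in use.
    void make_correlation_id(char *out, size_t len, const char *uuid, const char *tag)
    {
        snprintf(out, len, "%s-%s", uuid, tag);
    }

    // Quick sanity check on a reply, before touching the database: the correlationID
    // coming back must end in the tag for the message type we originally sent.
    bool correlation_id_matches(const char *cid, const char *expected_tag)
    {
        size_t cl = strlen(cid), tl = strlen(expected_tag);
        if (cl < tl + 2 || cid[cl - tl - 1] != '-')
            return false;
        return !strcmp(cid + cl - tl, expected_tag);
    }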

Overall - OTS is happening and mostly working, so do try it when you want to switch telco.

FB9000

I know techies follow this, so I thought it was worth posting and explaining... The FB9000 is the latest FireBrick. It is the "ISP...