Monday, October 5, 2020

Super-Smelter and Furnace Array Designs

For the last few days I've been looking at various super-smelter designs, as I want to build something with higher throughput than the double-furnace design I currently use.

I've been using automated smelters for years, but only recently started looking at increasing their throughput.  A double-smelter was quite the step up...

Moving to larger setups means more stuff, faster (but also more complications); the sketch after this list works out the rates:

  • A single furnace cooks a stack in 640 seconds... nearly 11 minutes.
  • A double-smelter is, amazingly, half that: 320 seconds.  5 minutes is still a long time.
  • 4x gives a stack in 160 seconds (still about 3 minutes).
  • 8x gives a stack in 80 seconds (now we're cooking).
  • 16x gives a stack in 40 seconds.
  • etc.
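All of those rates fall out of the same 10-seconds-per-item smelting time.  Here's a quick Rust sketch of that arithmetic (just an illustration, not part of any of the designs below):

    /// A vanilla furnace smelts one item every 10 seconds.
    const SECONDS_PER_ITEM: f64 = 10.0;
    const STACK_SIZE: u32 = 64;

    /// Seconds for an array of `furnaces` to finish one stack, assuming the
    /// input is split evenly across all of them.
    fn seconds_per_stack(furnaces: u32) -> f64 {
        STACK_SIZE as f64 * SECONDS_PER_ITEM / furnaces as f64
    }

    /// Total items the array smelts per hour.
    fn items_per_hour(furnaces: u32) -> f64 {
        furnaces as f64 * 3600.0 / SECONDS_PER_ITEM
    }

    fn main() {
        for furnaces in [1u32, 2, 4, 8, 16, 22] {
            println!(
                "{:2} furnaces: one stack in {:3.0}s, {:4.0} items/hour",
                furnaces,
                seconds_per_stack(furnaces),
                items_per_hour(furnaces)
            );
        }
    }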
In looking at the different designs and scales, especially as they grow, I found some design issues and some pros and cons to each.

All of them are expensive, though.  Starting at 8x, every design needs at least 5 hoppers per furnace (25 iron ingots per furnace), and that gets expensive in a hurry.  Especially as I don't build iron farms, for ethical reasons.

Here's a rundown of the various designs I've found, some of the issues I've seen with them, and the general expense of building them.  Nothing that follows is really my own design; this is a reference to others' designs (with credits).

Double-Smelter

image from Mumbo Jumbo:
https://youtu.be/yyyhxRztamE?t=23

It's cheap, it works, it handles multiple stacks of input, and it generally doesn't give any issues.  With the addition of a couple of levers, it still gives access to the XP that the furnaces have been banking up while auto-smelting.

The design is simple: double-chests for input, fuel, and output, with hoppers in between.  It's just like a standard single-smelter design, but with two furnaces side-by-side; the double-chests let both hoppers pull input and fuel items at the same time, and push into the same output chest.

But it's still pretty slow.  At 10 seconds for 2 items, a whole stack of 64 of anything takes 320 seconds to smelt/cook.

Rate:  1 stack every 320 seconds, or 720 items/hour

Materials cost:

  • iron: 30 ingots
  • cobblestone: 16
  • planks: 96

Quad-Smelter

Going from 2 to 4 starts to get complicated, because it's not as easy as putting 4 hoppers under a chest: only 2 fit.  And doing a 1->2->4 split with hoppers and double chests doesn't work either, because for each pair of hoppers pulling from an intermediate chest, one will always check before the other (which one depends on the order they were placed down, and the order they're loaded the next time the chunk loads).

So you get this:

and only two furnaces ever get items.

But if the intermediate chests and hoppers are replaced with hopper minecarts, nudged off to the side so that each one sits over both hoppers, everything works nicely:


Place rails on the center two hoppers, walls on the outer two, and then the minecarts on the rails.  Nudge the minecarts to the end of the rails, against the walls, and then break the rails.  

There are ways to fix the hoppers, but honestly, the hopper minecart solution is better, so long as nothing can "fall into" the hopper minecarts.  You'll want to make sure this is well-covered and protected from mobs.

Rate:  1 stack in 160 seconds, or 1440 items/hour

Materials Cost:
  • iron: 80 ingots
  • cobblestone: 32
  • planks: 160 (2S+32)

4x Array with hopper pipe

Another way of building a 4-furnace array is using a hopper pipe.  This is the basic idea used for much larger setups, but due to how it works, I think it's overly expensive at this scale.

This is the simplest of the silent arrays; Mumbo has a video here.


It uses a comparator and a redstone torch inverter to lock a lower set of hoppers while an upper set forms a hopper pipe that pushes items horizontally over the furnaces.  When an item reaches the last hopper in the horizontal pipe, the lower set is unlocked and the items are pulled down.  Then, on the next cycle, they're pushed into the furnaces.

But that means the last 4 items put into the system get stuck.  Not a problem if you're building this for a dedicated farm, but it means that if you put in a stack, you'll only get 60 items out (the first time).

The solution is another set of hoppers, which pull from the unlocked hoppers before they relock, and then push into the furnaces:

But...  more hoppers, so more expensive.  This can be scaled up to 6 furnaces before it starts to run into problems that require more interesting redstone.

Rate:  1 stack in 160 seconds, or 1440 items/hour

Materials Cost (without the extra row of hoppers on input/fuel):
  • iron: 100 ingots
  • cobblestone: 32
  • planks: 208 (3S+16)
Materials Cost (with the extra row of hoppers on input/fuel so items don't get stuck):
  • iron: 140 ingots
  • cobblestone: 32
  • planks: 272 (4S+16)

8x Array using hopper minecarts

The hopper minecart design, however, can be expanded up to 8 furnaces by mirroring it and adding some width, as ilmango shows in this video.


That's simple, but it needs some work to feed it fuel:



Getting the hopper minecarts into place is tricky (although this shows a way to do it without pistons).

Rate: 1 stack in 80 seconds, 2880 items/hour (45 stacks/hour)

Materials Cost:
  • iron: 250 ingots
  • cobblestone: 64
  • planks: 416 (6S+32)  (chests can be substituted for the droppers shown above)

8x Array

At 8x, the hopper-pipe array is still more expensive than the hopper minecart mechanism, but it might fit a build better.


The tricky part is that the redstone gets more complicated: without extra care, a furnace gets skipped due to timing interactions between the unlocking and the order in which hoppers are evaluated within a tick.  The solution is locking the upper bank and adding delays along the way; ilmango talks through it in this video and this video.

Rate: 1 stack in 80 seconds, 2880 items/hour (45 stacks/hour)

Materials Cost:
  • iron: 290 ingots
  • cobblestone: 64
  • planks: 512 (8S)

Longer Arrays

With really big arrays, the limit is ~23 furnaces, because the time it takes the hopper pipe to move items all the way to the end equals the 10-second cook time (for smokers and blast furnaces, which only take 5 seconds, it's half that).
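That limit falls out of the hopper transfer cooldown: a hopper moves one item every 8 game ticks (0.4 seconds), so in theory the pipe can keep up with about 10 / 0.4 = 25 furnaces before the far end starves; in practice the designs top out around 23.  A rough sketch of that bound (my own back-of-the-envelope, not from the videos):

    /// Rough upper bound on how long a hopper-pipe furnace array can get.
    /// Assumes only the vanilla hopper cooldown of 8 game ticks (0.4s) per item.
    fn max_furnaces(cook_time_seconds: f64) -> u32 {
        const HOPPER_TRANSFER_SECONDS: f64 = 0.4;
        (cook_time_seconds / HOPPER_TRANSFER_SECONDS).floor() as u32
    }

    fn main() {
        // Regular furnaces (10s per item): 25 in theory, ~23 in practice.
        println!("furnaces: {}", max_furnaces(10.0));
        // Smokers/blast furnaces (5s per item): about half that.
        println!("smokers/blast furnaces: {}", max_furnaces(5.0));
    }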



At 16 furnaces:

Rate: 1 stack in 40 seconds, 5760 items/hour (90 stacks/hour)

Materials Cost:
  • iron: 570 ingots
  • cobblestone: 128
  • planks: 960 (15S)
At 22 furnaces:

Rate: 7920 items/hour, ~124 stacks/hour

Materials Cost:
  • iron: 780 ingots
  • cobblestone: 176
  • planks: 1296
Or if you _really_ want to go nuts, you can go build the quad 22-furnace array that Mumbo built... 
 

Sunday, June 21, 2020

Rust, Minecraft, and the Fragility of Software

I've been using Rust for the last year or so as the main language I write in.  Recently I went back to C++ for something else, and was struck by the difference.  I had been gradually liking Rust more and more as I used it, but switching back and forth between Rust and C++ really opens your eyes to the advantages that Rust has.

Rust's type system, and the way it weaves references, heap allocation, stack allocation, and the like into the type system, is really powerful.  Once you've gotten used to it, it makes a great deal of sense compared to the often opaque nature of C's pointers.

Yes, C++ has std::unique_ptr<> and has been trying to incorporate these concepts into the standard library, but it's not nearly as simple to use as Rust's default mode of moves and borrows.

In particular, the Rust compiler is a great ally, and it's continually getting better.  The ability to catch "use after move" and reference lifetime issues (e.g. use-after-free) is amazing.

But that, to me, is not the best part.

The best part is a standard library that has the notion of Option<T> and Result<T, Error> deeply embedded into it.  Option<T> is an enum type, generic over type T, with two variants: None and Some(T).  None is like null, except that it's effectively a sub-type to the compiler: you can't quite treat an Option<T> variable like it's a T, because it might be None.  But it's easy to use, especially with the match operator, map(), and the like.
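A tiny example of what that looks like in practice (a hypothetical function, just for illustration):

    /// Returns the first even value, if there is one.
    fn first_even(values: &[i32]) -> Option<i32> {
        values.iter().find(|&&x| x % 2 == 0).copied()
    }

    fn main() {
        let nums = [3, 7, 8, 11];

        // The compiler won't let us treat the result as a plain i32; we have
        // to handle both variants, here with `match`...
        match first_even(&nums) {
            Some(n) => println!("first even value: {}", n),
            None => println!("no even values"),
        }

        // ...or by mapping over the Option and supplying a default.
        let doubled = first_even(&nums).map(|n| n * 2).unwrap_or(0);
        println!("doubled (or 0): {}", doubled);
    }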

Checked math exists, and is A Thing, especially when dealing with time.  That's subtle, but it's what got me thinking about this (that, and explaining to my 9yo why computers have limits on the sizes of the numbers they can use).
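For time, std::time::Duration has checked variants built in; the scenario below (a time budget) is just an illustration:

    use std::time::{Duration, Instant};

    fn main() {
        let start = Instant::now();
        // ... do some work ...
        let elapsed = start.elapsed();

        // Subtracting the elapsed time from a budget can underflow;
        // checked_sub hands back None instead of panicking.
        let budget = Duration::from_millis(100);
        match budget.checked_sub(elapsed) {
            Some(remaining) => println!("time left: {:?}", remaining),
            None => println!("over budget by {:?}", elapsed - budget),
        }
    }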

Mathematical overflow is one of those things that we tend not to think about, except in the brief moment of choosing the type for a variable or member, and then much later, when something has overflowed and you're suddenly seeing nonsense values and realizing that something has gone terribly wrong.

Rust has a bunch of operations that are checked and return an Option<T>, allowing them to return None instead of nonsense.  And since that None isn't a null, but an enum variant that you're forced to contend with, the compiler won't let you pretend it's Some(T).

Unfortunately, that can lead to some clunky code when trying to do math (say, converting Fahrenheit to Celsius), if each step is being checked for overflow.
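For instance, an all-checked Fahrenheit-to-Celsius conversion (integer version, purely as an illustration) ends up looking something like this:

    /// C = (F - 32) * 5 / 9, with every step checked.  Each checked_* call
    /// returns an Option<i32>, so any intermediate overflow becomes None
    /// instead of a nonsense value.
    fn fahrenheit_to_celsius(f: i32) -> Option<i32> {
        f.checked_sub(32)?.checked_mul(5)?.checked_div(9)
    }

    fn main() {
        println!("{:?}", fahrenheit_to_celsius(212));      // Some(100)
        println!("{:?}", fahrenheit_to_celsius(i32::MAX)); // None: the multiply overflows
    }

It's not terrible with the ? operator, but it's noticeably noisier than just writing (f - 32) * 5 / 9.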

But that clunkiness lays bare the fragility that underlies a lot of software.

We assume that so much of the software we write is safe, and for the most part it is.  Until it isn't.

Another example, and what started me on this line of thought, was my 9yo asking about the Far Lands in Minecraft, a world-generation bug that occurred at high values along the X and Z coordinates (the ground plane).  It occurred to me that this was likely due to overflow, or the imprecision of floating point at large values (which also shows up in Minecraft).

I've long been aware of these issues, but treated them as special cases; by making some choices early on, one can mostly ignore them.  I mean, 640KB should be enough for anyone, right?

But these things, and using Rust, have really been making me re-evaluate just how often we make these assumptions, and how fragile most software is, especially if it ever faces inputs the developer didn't expect.  And not just user inputs: corrupt I/O readings, packet errors, and the like can be pernicious in embedded work.

Rust certainly isn't perfect.  As I mentioned earlier, the checked math routines are clunky to use and, for the most part, aren't the default.  Tools like proptest exist, which can help set up the bounds for limiting bad inputs to your functions, but it's still a bunch of work to always be thinking about what these limits and error potentials mean.

But as compilers get better, especially with expressive type systems like Rust's, I'm hoping we'll get to a point where we can catch these sorts of errors at compile time and, as a result, get closer to a place where we can categorically remove classes of errors from programs.

Monday, June 1, 2020

Keyboard Modding: Spectrograms of Pinging Springs

After the previous round of mods, and sitting here in my (sometimes) quiet home office, I realized that I could hear a long fading-out "ping", or at least an "iiiiinnnng", after I finished typing.  It's often completely drowned out by music when I'm working, though, so it isn't that loud.

But once you start paying attention, and start making things quieter, it's a slippery slope.

This evening I dug out the spare switches (Kaihua Speed Bronze) for my keyboard, held one up to my ear, and tapped it.  And heard the same thing ("tap-iiiiiing"), though not quite as loud as what I hear from the keyboard itself.

So I recorded it, and poked at some other switches that I'd lubricated last week, to see if they did the same.  And they did not...

But first, here's the spectrogram (from Audacity) of the stock switch:

The span on this, for reference, is about 320ms.  That long smear at 4800Hz, and the bands ~1400Hz above it, are really interesting.  It looks like the fundamental might be around 1200-1400Hz (there are a few spots of energy there during the impulse), and these are harmonics that are ringing.

Also visible is a similar pattern for 2, 4, 6, and 8kHz.  Probably lots of things going on.

Now, here's the spectrogram (with the same timespan) from similarly tapping the side of a switch where I've coated the ends of the main spring, the sides of the slider, and the leaf-switch contacts with Krytox 250g0 grease (but not the click bar):


The difference is pretty stark, both in sound to the ear ("tap", not "tap-iiiiiiiinng") and visibly in the spectrogram.

But then I looked back at the frequency plots I made from the up- and downstrokes:

Notice the sudden jump in the level of the upstroke at 4800Hz, which comes back down around 6500Hz, and the similar bumps at the 2nd and 3rd harmonics of those?

I think one of the reasons it's so audible when typing is that there are a lot of keyswitches, all of which get an impulse whenever any key is pressed (especially since I tend to bottom out the keys).



That lack of ping is also apparent when pressing and releasing the switch (held up to my ear).

Lubing the switches is on my mod list, but this might move it up in overall importance.  It's really quite the difference.  But since it's hard to reverse, I'm not doing it yet (I have a few other changes I want to test first).



A note:  These are Kaihua (aka Kailh) "Speed Bronze" switches: a clicky type with a very low actuation point.  But the click mechanism is very different from that of a Cherry MX Blue.  Instead of an internal piece that moves up/down against the contacts, this has a "click-bar", which is separate from the tabs on the slider that close the contacts.

So, 3 springs total.  But in this case, the grease is only on the main spring and the contacts, not the click-bar.  Greasing the click-bar makes the click much, much quieter, and gives it a slightly mushy feel.


Sunday, May 31, 2020

Keyboard Modding: Stabilizer changes frequency analysis

As if modding stabilizers wasn't geeky enough, I went and took the recordings from that work and did a bunch more frequency analysis.  The images in the previous post were just spectrogram plots from Audacity.

I used Audacity to plot the spectrum (FFT) of the clips of hitting the spacebar, and to see how the frequency content of the clips changed with each modification.


But of course, the FFT produces linearly spaced frequency bins, so the upper octaves were pretty hard to read; I replotted on a linear frequency scale to make them clearer:


Tuesday, May 26, 2020

Keyboard Modding: Stabilizers

My daily-use keyboard is a GMMK TKL with Kailh Bronze switches (clicky) and SA-profile keycaps from MaxKey.  It's a decent, inexpensive entry to the world of mechanical keyboards.  Mine is the "customized" version, with hot-swap sockets, so that I can experiment with various key switches.



And while I like the feel of the clicky keys (and the Kailh Bronze are very sharply clicky), it's a really noisy keyboard.  It's lightweight, the stabilizers rattle, and if I'm heavy on the spacebar, I can hear other springs in the switches vibrating.  (The switch to working from home really pointed out just how loud this thing is, with all the little sounds it makes as I type.)

And so I found myself ordering a bunch of new stabilizers, some lubricants, and while I was at it, a bunch more switches to play around with.

Sunday, March 22, 2020

Bufferbloat with Comcast gigabit with Arris SB8200

With working from home due to COVID-19, I decided it was finally time to upgrade my service.  I moved up to the gigabit plan (mostly for the extra upload bandwidth), and that also required a new modem, so I bought an Arris SB8200.  It's DOCSIS 3.1, but it's apparently syncing with my CMTS via DOCSIS 3.0, so these tests are with it in that mode.

I ran a couple of raw wired performance tests and saw the expected ~940Mbps download, but upload was seesawing all over the place.  Per http://fast.com, download latency was negligible, but upload latency was well over 200ms.

So I moved behind my home router (WRT1900AC running OpenWRT), and ran a set of tests with https://flent.org.


Friday, December 23, 2016

Cake: the latest in sqm (QoS) schedulers

Today I finally had the opportunity to try out Cake, the new replacement for the combination of HTB+fq_codel that the bufferbloat project developed as part of CeroWRT's sqm-scripts package.

Background

The bufferbloat project is tackling overbloated systems in two ways:
  1. Removing the bloat everywhere that we can
  2. Moving bottlenecks to places where we can control the queues, and keep them from getting bloated
sqm-scripts, and now cake, are part of the latter.  They work by restricting the bandwidth that flows through an interface (ingress, egress, or both), and then carefully managing the queue so that it doesn't add any (or much) latency.

More details on how cake works can be read HERE.

The WNDR3800

Cake was meant to perform well on lower-end CPUs like those in home routers, so the test results that follow are all from a Netgear WNDR3800.  This was a fairly high-end router five years ago, when it was new.  Now its dual 802.11n radios are falling behind the times, and its 680MHz MIPS CPU is distinctly slow compared to the >1GHz multi-core ARM CPUs in many current home routers.

All the tests that follow were taken using the same piece of hardware.

Final Results

I'm starting with the final results, and then we'll compare the various revisions of settings and software that led to this.

Comcast Service Speeds:
180Mbps download
12Mbps upload
100s of ms of latency

Cake's shaping limits (before the CPU is maxed out):
~135 Mbps download speed
12Mbps upload
no additional latency vs idle conditions



What's really impressive is how smooth the incoming streams are; they really are doing well.  Upstream is also pretty good (although not great; this is the edge of what the CPU can manage).  But what's simply amazing is the latency graph: it doesn't change between an idle and a fully-in-use connection.



And the CDF plot really shows that: there's no step between idle and loaded operation, just a near-vertical line around the link latency (which is almost entirely between the modem and the head-end).