Monday, August 3, 2015

Sharpness Comparison: Nikon DX 18-200 vs. FX 24-120 f/4

A friend of mine and I have each owned the Nikon 18-200 DX zoom, which, for an 11x zoom, is a pretty impressive lens.  But we both felt that it never delivered what we really wanted out at 200mm.

Especially as we moved up to FX bodies (he's on a D750, I'm on a D600) and started shooting with prime lenses (the Nikon 50/1.8G and the Tokina 100/2.8 are my main lenses; both are stupendously sharp, even wide open).

In comparison to the primes, shooting with the 18-200 was just disappointing.  Obviously, an 11x superzoom has to make a lot of compromises, and the test data backs that up (DxOMark, DPReview, photographylife.com).

My friend pointed me at the 24-120/4; he's picked up a copy (kitted with the D750), and I rented one from +BorrowLenses.com over this last weekend.  It's a fantastic walkabout lens, and is super sharp for what it is.  It's not the 28-70/2.8.  But then, it costs 2/3 as much and zooms nearly twice as far, in trade for the drop from f/2.8 to f/4.0.

What I noticed the day I picked it up is that at 120mm, it felt sharper than the 18-200 did at 200mm.  Maybe it's the better VR, or the much shorter focal length reducing camera shake (I have horrible hand-shake for a photographer).

And so after shooting with it for the last 6 days, and loving it, I decided that I needed to shoot some benchmark photos in controlled circumstances, with the same subject, in a short timeframe.

I don't have a snazzy photo target and lab space for this, so I reparked my motorcycle and used its fairing graphics and bolts as my sharpness indicators.

The first test was to set up the tripod and shoot the bike from a distance, simulating a situation where I'm short on lens and going to be cropping down to a subject that's too far away.  This is my "zoo" and "motorsports" scenario.

24-120 @ 120mm (FX), f/6.3, 1/125 sec, ISO 110
18-200 @ 200mm (FX), f/6.3, 1/200 sec, ISO 400
I think that in this case, the 18-200 edges out the 24-120, but you need to be looking at a fairly large image to tell (the thumbnails on the screen look equivalent to me).  Pixel-peeping gives the edge to the superzoom, but not by a huge margin.

And the 200mm shot's advantage shows up when you scale it down to the same pixel dimensions as the 120mm shot (after cropping both down to the same subject):

24-120 @ 120mm, cropped
18-200 @ 200mm, cropped and scaled down to same pixel dimensions as the 120mm
But again, you need to be looking much closer than this to really see the difference:

24-120 @ 120mm
18-200 @ 200mm
In this case, the 18-200 clearly edges out the 24-120, but it took a bunch of zooming in to make it apparent.

And even then, I'm not sure that I care that much, as at normal viewing sizes, they're pretty comparable.


But then I realized that there's another use case: one where you're not stuck trying to photograph something with a lens that's really too short for what you want.  If you can move closer or farther from the subject, so that the subject fills the frame when the picture is taken, and put the whole image to use, what can you get?  (i.e., make the same composition through the viewfinder with each lens, and just use that.)

24-120 @ 120mm
18-200 @ 200mm (FX)
18-200 @ 200mm (DX, for 300mm equivalent)
Here, the two FX-composed photos have about 50% more lines of resolution than the DX crop (as expected), and it makes a huge difference (although, again, you need to be viewing at full screen to see it).
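To put a rough number on that (using the D600's published image sizes, if I have them right): the FX frame is 6016 x 4016 pixels, while the DX crop is 3936 x 2624, so the FX-composed shots have 6016 / 3936 ≈ 1.53x the linear resolution, i.e. about 50% more lines.  The DX crop is also what gives the 1.5 x 200mm = 300mm-equivalent field of view noted above.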

After scaling them all down to the same pixel dimensions and zooming in, however, the difference is stark, and the 24-120 handily outdoes the 18-200.

24-120 @ 120mm
18-200 @ 200mm (FX)
18-200 @ 200mm (DX, for 300mm equivalent)
However, the 120mm shot was also taken from _much_ closer (about 10m at 120mm vs. 16m at 200mm vs. 25m at 300mm-equivalent for the same field of view).  So perhaps that's just cheating...
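(Sanity check on those distances: for the same framing, subject distance scales linearly with focal length, so starting from about 10m at 120mm you get 10 x 200/120 ≈ 17m at 200mm and 10 x 300/120 = 25m at 300mm-equivalent, which lines up with the numbers above.)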

But, overall?  I'm definitely going to upgrade to the 24-120.  It's far better than the 18-200 over the range where the two are equivalent (18-80mm on the DX lens); my measurements put the FX zoom at about 50% more linear resolution than the DX zoom there.  For the range that the 24-120 doesn't cover, and has to be cropped down for, it's close enough.

And that's before taking the constant f/4.0 aperture into consideration.  The 18-200 only comes close to that at 18mm, where it has far worse corner vignetting and can't be used on FX at all, and it very quickly drops to f/5.6 toward the long end.



The 24-120 is going to give me an FX lens that covers the entire range the 18-200 did, and almost always does a better job; only when I've reached the point where I should be using a 300mm+ lens am I really giving anything up.  And in those cases, I should just be using a big lens anyway.  At least for me, I just don't need that long a lens very often.  And when I do, I know far enough in advance that I can go rent the exceptionally sharp 70-200/2.8 or 300/4 for about $120/week.

Tuesday, June 2, 2015

HTB rate limiting not quite lining up

I've noticed this off and on, first with the WNDR3800 and now with the WRT1900AC: the rates I enter for the sqm_scripts aren't being met, and not, I think, because of CPU load issues, but because of something about the bookkeeping.

Here's a set of tcp_download tests on the WNDR3800; the ingress rate limits are in the legend:

The WNDR holds up linearly until about 90Mbps, and then it's clear that everything comes apart.  In the linear region, the measured "goodput" runs at an eyeballed 95% of the rate set in the limiter, which is likely just the expected TCP overhead vs. the raw line bit-rate (which is where the limiter is running).
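A quick back-of-the-envelope check on that 95% (assuming full-size 1500-byte packets and standard TCP/IPv4 headers with timestamps):

  payload per packet:  1500 - 20 (IP) - 20 (TCP) - 12 (timestamps) = 1448 bytes
  bytes the shaper counts:  roughly 1500-1514 per packet (the IP packet, plus the Ethernet header depending on where it's measured)
  goodput / shaped rate:  1448 / 1514 ≈ 96%

Add ACK traffic, slow-start, and the odd retransmit on top, and an eyeballed 95% is right in line.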

However, on the WRT1900AC, it's off rather significantly:


Maybe 80% of the target ingress rate?

+Dave Taht suggested I turn off the TCP offloads, and it got less linear: worse on the low end, better on the high end.


This is definitely going to take some more testing (and longer test runs) to map out what the issue(s) might be.

**

Corrections:  This post previously stated that the WNDR3800 was falling short, but after talking with some other people, I think that's likely just the expected overhead of TCP, which becomes a more obvious gap between the raw line rate and the "goodput" as bandwidth goes up (5Mbps is easier to see than 500Kbps).

sqm_scripts: before and after at 160Mbps+


Apparently I've been upgraded.  I did a baseline test today (with sqm off) and saw that the new download rate was up around 160-175Mbps, from 120-130Mbps.  That's some very impressive over-provisioning from Comcast.

Unfortunately, it also comes with some rather nasty bufferbloat.  That's a surprising change for the worse: when the service was initially installed, with the same modem, it was actually quite good by "retail" standards (though still awful vs. what it should be).

The ugly (but fast):



Classic bufferbloat.  At idle, the target endpoint is maybe 10-12ms away.  200+ms of latency under load is pretty awful, and it drags the "effective" performance of the service from >150Mbps down to what "feels" like a couple of Mbps.

After upping the limits in sqm and turning off the TCP stack offloads, I ended up with this:



So, total bandwidth available has dropped to about 140-150Mbps (still more than the 120Mbps the service is sold as), but the latency is basically gone.  fq_codel holds its 5ms target rather nicely.

To make that latency difference more apparent:


Settings:
  • 200Mbps ingress limit (something is odd with the math on this, clearly)
  • 12Mbps egress limit
  • ethtool -K eth1 tso off gso off gro off
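For reference, those limits end up in the sqm config; on OpenWrt-style firmware that's roughly this (a sketch: the option names are from the stock sqm-scripts /etc/config/sqm, and the interface name is whatever your WAN happens to be):

  config queue
          option interface 'eth1'      # WAN-facing interface (illustrative)
          option download '200000'     # ingress limit, in kbps
          option upload '12000'        # egress limit, in kbps
          option script 'simple.qos'
          option qdisc 'fq_codel'
          option enabled '1'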

TCP Offloads: more harm than good

+Dave Taht has been saying for a while that TCP offloads do more harm than good, especially when mixed with fq_codel and the ingress rate limiter that the sqm_scripts package uses to replace the large inbound buffers in the modem and CMTS with a much smaller buffer (at nearly the same bandwidth), under the router's control.
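For context, the ingress half of that is conceptually something like the following (a heavily simplified sketch of the sort of thing simple.qos sets up; the real script adds classification, ECN handling, overhead accounting, and more):

  # redirect inbound traffic from the WAN interface to an IFB device so it can be shaped
  ip link add ifb0 type ifb 2>/dev/null; ip link set ifb0 up
  tc qdisc add dev eth1 handle ffff: ingress
  tc filter add dev eth1 parent ffff: protocol all u32 match u32 0 0 \
      action mirred egress redirect dev ifb0
  # HTB provides the rate limit; fq_codel manages the now-small queue behind it
  tc qdisc add dev ifb0 root handle 1: htb default 10
  tc class add dev ifb0 parent 1: classid 1:10 htb rate 200mbit
  tc qdisc add dev ifb0 parent 1:10 fq_codel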

I finally put some numbers on that tonight.


The first dataset (the green plots) is without GRO, TSO, and GSO.  The second set is with those offloads all re-enabled.  So enabling the offloads:

1) slows it down
2) increases latency

??

Yeah, I'm keeping all the offloads turned off (and I've adjusted my router startup scripts to keep them off each time the simple.qos script runs).
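Concretely, that just means re-applying the ethtool setting whenever the shaper is (re)started; something like this (the path and the exact hook are illustrative, and vary by firmware):

  # e.g. in /etc/rc.local, or in a hook run alongside simple.qos
  ethtool -K eth1 tso off gso off gro off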

Saturday, May 23, 2015

sqm-scripts on Linksys WRT1900AC (part 1)

More actual context later; for now, I just wanted to make a quick post with some numbers from tonight's tests.


  • Comcast Blast! (120Mbps/12Mbps) internet service
  • Arris SB6141 modem
  • Linksys WRT1900AC router
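For anyone wanting to reproduce these plots: they're the kind of latency-under-load runs that netperf-wrapper produces (it's also where the tcp_download test name in the HTB post above comes from).  Assuming that tool, a typical run looks something like this, with the server address and run length as placeholders:

  netperf-wrapper -H netperf-server.example.com -l 60 rrul
  netperf-wrapper -H netperf-server.example.com -l 60 tcp_download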
Stock firmware:

Full speed, but a fair bit of latency (although, honestly, 120ms is pretty good compared to most stock setups)




After (with sqm-scripts running):





And what's even better?  Wireless is pretty awesome, too:


Tuesday, February 3, 2015

Thoughts on IoT: Products and Novelties

To be successful, you need a product

The Internet Refrigerator is a meme that gets a lot of jeers, but I think at the core of those jeers is the fact that it seemed like a solution looking for a problem.  To me, most of IoT feels like this (especially most of the stuff at CES).

Early devices like the Internet Toaster were built because they could be built.  Dares and fun side projects.  But they weren't real products that solved someone else's problems or filled a need.

I've seen a lot of Kickstarter projects and startup companies go after home automation without really having a product or service that solved a real problem.  And those have mostly faded away.  The compelling reasons to use the products aren't there, leaving them as no more than novelties.

This is, perhaps, the genius of the Nest.  It's a thermostat (boring), made to look beautiful (yes, a novelty), but with remote access, and the smarts to learn your schedule instead of you telling it your schedule.  The goal is clear, the execution is beautiful, and now that they've had time to refine the results, everyone I know with one loves it.

I have an internet-enabled bathroom scale from Withings.  Yes, really.  What it offers is that it remembers my weight, every time I step on it.  And it gives me that data later, graphed over time.  And it does the same for my heart rate and some other health-related data.  The problem it solves is that I hate data entry, which is why I never was very good at tracking my weight in the past.  Now it's tracked for me, automatically.  And now I have a handy reference for some of my vital stats.

These are useful products, even if they're still overly expensive luxury items, and the electronics are only going to keep getting cheaper.  But if the business proposition doesn't fill a need, and is just "X plus the internet", I don't see it ever being more than a novelty today, and tomorrow's humorous internet meme.

Monday, February 2, 2015

Thoughts on IoT: Introduction and Hurdles

The Internet of Things is about connecting devices.  To what?  To everything.  It's not about making toasters or refrigerators that are "internet-enabled", it's about connecting devices to everything else.  And the internet is the doorway to "everything else".

Nearly a decade ago, when I entered this space, I hadn't realized quite what it was going to turn into.  Since then, I've seen a bunch of ideas come and go (and some come back again).  The core problems really haven't changed, and the solutions are getting better every year.

This is the first in a series of posts about the Internet of Things: lessons I've learned from it, the hurdles for anyone in the space, problems that I think we need to solve, and where I'd like to see things go in the future.

These posts aren't likely to come in any particular order, and the list below isn't in one, either.  Most of the items below are the hurdles I mentioned in the title: the problems that need to be solved in IoT.




Successful products and services vs. novelties - Most IoT devices seem like novelties, and that's a fair argument to make.  I think we're still in a phase where we're exploring what we can do and trying to figure out how to build solutions to real problems.

Communication within the home - This is mostly about engineering the solution, but also about standards.  It's a really hard problem to solve neatly, in a way that works in the "real world".

Communication from the device to the cloud - Talking out can be easy (if you have TCP/IP), but trying to communicate back to the devices gets a lot more complicated in a hurry, given the current state of home networking.

Security - Security is always hard, and IoT marries several things that each expose new problems: embedded-device security, network security, and data security (to say nothing of data privacy).

Data - As we connect more devices to the internet, and start trying to do things with their data, we're going to need to deal with that data:  ownership, storage, search, privacy, analytics, automation, etc.

Interoperability - It's not really an internet of things if the things can't talk with each other (or their clouds can't).  The Hue is a very expensive, smartphone-controlled lightbulb by itself, but it could be so much more if other things could talk to it.



I don't see any one company solving all of these, or any one consortium, either.  But as an industry, as builders of the internet and the devices we connect to it, we'll need to solve these issues for the Internet of Things to really bloom to its full potential.