First off, I think bouldering is too nuanced to ever be perfectly scored. The new system obviously has drawn its share of criticism, but most other methods fall down in certain cases too.* All in all, I think the new system has some real merits. Here's my stab at a better explanation of, and rationale for, this new method of scoring.

Imagine a competition series with 4 different events spread out over time. After each event you can rank the athletes by their performance (assuming some scoring system for the event). If you want to crown a winner for the whole series, you need a way to combine the per-event rankings. A simple approach is to work with the place each person earned in each event (i.e., 1st, 2nd, and so on). If there's a tie, find some way to assign points (e.g., average the places where the tie occurred -- this is the scheme ABS used). The simplest way to combine these placings is then to average each athlete's rankings across the events; the person with the lowest average ranking wins.
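As a concrete sketch, here's how that averaging scheme might look in code. This is my own toy implementation (not anything official), including the ABS-style handling of ties within an event:

```python
# Toy sketch of the "average your placings" scheme described above.
# Not the official ABS code -- names and structure are my own.
from statistics import mean

def event_ranks(scores):
    """Map athlete -> rank for one event. Tied athletes split the
    places they occupy (the ABS-style tie scheme)."""
    ordered = sorted(scores, key=scores.get, reverse=True)  # best score first
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        # advance past everyone tied with athlete i
        while j < len(ordered) and scores[ordered[j]] == scores[ordered[i]]:
            j += 1
        tied_places = range(i + 1, j + 1)  # 1-based places they span
        for athlete in ordered[i:j]:
            ranks[athlete] = mean(tied_places)
        i = j
    return ranks

def series_ranking(events):
    """Average each athlete's per-event rank; lowest average wins.
    Assumes every athlete appears in every event."""
    athletes = events[0].keys()
    all_ranks = [event_ranks(e) for e in events]
    avg = {a: mean(r[a] for r in all_ranks) for a in athletes}
    return sorted(athletes, key=avg.get)
```

For example, with two events where B and C tie for 2nd in the first event, they each get rank 2.5 there, and the series order falls out of the averages.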

Now imagine that each event had only one boulder problem in it. Guess what: you now have the scoring system used in this year's ABS nationals, with 2 caveats.

1) The total number of tops trumps your ranking (i.e., average ranking is used just to break ties in the total tops score).

2) Instead of taking the arithmetic mean (average) of the rankings, the geometric mean is used. That's all there is to the geometric mean: it's the *n*th root of the product of the rankings -- just a different way to compute an average. It has the advantage that it suppresses outliers (i.e., doing poorly on one problem has less effect on your overall ranking than with the arithmetic mean). See figure and description below.
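Putting the two caveats together, the final ordering might be sketched like this (my own illustration with made-up names and numbers, not the official scoring code): sort climbers first by total tops, and break ties with the geometric mean of their per-problem rankings.

```python
# Sketch of the final ordering: total tops first (more is better), then
# the geometric mean of per-problem rankings (lower is better).
# All names and numbers here are invented for illustration.
import math

def geo_mean(ranks):
    """n-th root of the product of the n rankings."""
    return math.prod(ranks) ** (1 / len(ranks))

climbers = {
    # name: (total tops, per-problem rankings)
    "ana":  (3, [1, 1, 4, 2]),
    "ben":  (3, [2, 3, 1, 1]),
    "cara": (2, [3, 2, 2, 3]),
}

# Negate tops so more tops sorts first; geometric mean breaks the tie.
final = sorted(climbers,
               key=lambda c: (-climbers[c][0], geo_mean(climbers[c][1])))
print(final)  # ['ben', 'ana', 'cara']
```

Here ana and ben both have 3 tops, so the geometric mean decides between them, and cara's lower top count places her last regardless of her rankings.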

The individual boulder problem rankings were determined using a system that we're all pretty familiar with. Namely, highest hold achieved is most important, and ties are broken by the number of attempts to reach that high point. If two people are still tied, then they just split the ranking points. See the full description of the scoring method here.
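A minimal sketch of that per-problem ordering (field names and numbers are invented): higher hold sorts first, and fewer attempts to that hold breaks the tie.

```python
# Ordering climbers on a single problem: highest hold reached wins,
# fewer attempts to that high point breaks ties. Data is made up.
results = {
    "ana":  (15, 2),   # (highest hold reached, attempts to reach it)
    "ben":  (15, 4),
    "cara": (12, 1),
}

# Negate the hold so higher holds sort first; attempts sort ascending.
order = sorted(results, key=lambda a: (-results[a][0], results[a][1]))
print(order)  # ['ana', 'ben', 'cara']
```

Note cara's single attempt doesn't help her: the high point always dominates, so her lower hold places her last.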

In my opinion, this new scoring system is completely reasonable and may prove to produce more consistent results than previous methods. It just didn't fare too well in its first showing. I agree that it's a bit complex for the viewer to follow, but none of the other methods are that easy to follow either unless you've been watching comps for years. If the scores are constantly displayed for the viewers, this is less of a problem. I do concede that it's a bit strange that with this new method, the relative ranking of two individuals who have finished climbing can be changed by some climber later in the round. For me, that's not really a big deal though... you just have to wait till the end of the round to know the results. There's a lot more to go into here, but one thing I like about this new method is that it strives to assess the difficulty of each problem in a way that is relative to all the climbers.

As a side note, here's a quick analysis that compares the arithmetic and geometric means for several sets of 4 rankings. All the sets have the same arithmetic mean (3), but they differ in their geometric mean. I've ordered them to be ascending in geometric mean, and I'd argue that the ordering is roughly consistent with what subjectively seems to be better performance (i.e., a ranking of 1,1,1,9 is better than 3,3,3,3 in my opinion).
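The sets below are my own examples (not necessarily the exact ones in the figure), but they have the same property: every set has arithmetic mean 3, while the geometric means differ.

```python
# Sets of 4 rankings that all have arithmetic mean 3, listed in
# ascending order of geometric mean. Example sets are my own.
import math
from statistics import mean

sets = [[1, 1, 1, 9], [1, 2, 4, 5], [2, 2, 4, 4], [3, 3, 3, 3]]

def gm(ranks):
    return math.prod(ranks) ** (1 / len(ranks))

for r in sorted(sets, key=gm):
    print(r, "arith:", round(mean(r), 2), "geo:", round(gm(r), 2))
```

The outlier-heavy set 1,1,1,9 comes out best under the geometric mean (about 1.73) even though its arithmetic mean is identical to that of 3,3,3,3 -- which is exactly the outlier suppression described above.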

*For example, points-per-hold methods aren't good when a problem has a crux down low followed by lots of easier moves, because someone who gets through that crux racks up lots of points. Compare that to a problem with easier moves at the start and then one hard move at the end. Ignoring the effect of getting a top, the climber who does the hard move at the top of the second problem doesn't get rewarded nearly as much as the person who gets through the hard move on the first.
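To make the footnote concrete, here's a toy comparison (hold counts invented) of the point swing each problem's hard move produces under a one-point-per-hold scheme:

```python
# Toy illustration of the points-per-hold critique. Both problems have
# 10 holds and one equally hard move; only its position differs.
p1_holds = 10  # problem 1: crux is the first move, then 9 easy moves
p2_holds = 10  # problem 2: 9 easy moves, then the crux at the top

p1_fell_at_crux = 0            # fell at the low crux: no holds scored
p1_passed_crux = p1_holds      # passed it and cruised the rest
p2_fell_at_crux = p2_holds - 1 # scored 9 holds on easy climbing alone
p2_did_crux = p2_holds         # did the final hard move

print(p1_passed_crux - p1_fell_at_crux)  # swing for problem 1's hard move: 10
print(p2_did_crux - p2_fell_at_crux)     # swing for problem 2's hard move: 1
```

The same hard move is worth 10 points on one problem and 1 point on the other, which is the asymmetry the footnote is complaining about.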