
horizon




Story feedback goes HITEC · 1:32am Aug 23rd, 2014

NOTE: HITEC is a legacy system, and I've standardized on its successor, HORSE. See this post for the HORSE story rating system.


So: While working on my feedback for this month's Minific Writeoff, I thought up a story assessment system that seems useful — and I'd like your feedback.

The thing is, every rating system I've seen is for the benefit of other readers, not authors. Rating a story "4/10" or "3 stars" or "Recommended" can give an author a sense of how well you liked it, but not how it could be improved. Typically, that's where the text of the review comes in, pointing out specific shortcomings.

However, talking about how a story could be improved is like falling down Alice's rabbit hole — there's a whole crazy wonderland of advice out there, through which you have to trace a tiny, inadequate path, ringed by literary swamps and headcanon battlefields and the huge, bleak wastelands of Missing The Point. Sometimes, the best advice you can quickly give is to zero in on a single problem and dissect it, ignoring everything else — which can be like a geologist pointing to some granite slabs and talking about earthquake faults while in the middle of a beautiful valley full of breathtaking flowers and sparkling rainbow waterfalls. (And, like Alice's rabbit hole, sometimes the metaphors that initially seem the most interesting are the ones that take an abrupt turn and lead you into a dead-end. Ahem.)

So, my challenge was: How can I give an overview of a story's relative strengths and weaknesses independent of review text? And, more importantly, how can I do that in a way independent of my enjoyment of the story, so that the feedback doesn't have to get tied up with authorial ego?

Enter: the HITEC scale.

The idea is to measure a story's strengths and weaknesses against each other. Every story gets 100 HITEC points, no matter how good or bad it was, to distribute between its five categories. The higher an individual category scores, the more solid the story's execution was in that category, relative to the other categories. (Think of a category's score as, "x% of my enjoyment of reading the story came from this category.")
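(For anyone who thinks better in code: here's a minimal sketch of the rule in Python. The record layout and function name are just my illustration, not part of the system.)

CATEGORIES = ("Hook", "Idea", "Technical", "Execution", "Consistency")

def make_hitec(hook, idea, technical, execution, consistency):
    # Bundle the five category scores, enforcing the one hard rule:
    # every story gets exactly 100 points, no more, no less.
    scores = dict(zip(CATEGORIES, (hook, idea, technical, execution, consistency)))
    total = sum(scores.values())
    if total != 100:
        raise ValueError(f"HITEC scores must sum to 100, got {total}")
    return scores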

So, for example, I might assign fandom ur-story "My Little Dashie" an H-I-T-E-C score of: 10-40-20-0-30

The five categories, in order:

Hook - First impressions. How much the title and first few paragraphs grabbed me. This may seem like a trivial thing to focus on, but when you're trying to pull readers in from the front page or featurebox, first impressions really count. (10: A relative weakness of the story. MLD starts with thirteen lengthy paragraphs describing the stultifying mundanity of the narrator's life — you're almost 1000 words in before he finds the box.)

Idea - How compelling and original the core premise is. This is NOT a measure of story quality — for instance, Obselescence's "Most Dangerous Game" contest challenged authors to make great stories out of the most stale premises possible, and it produced some smashing stories — but original ideas make a story stand out. (40: The story's strongest feature. We may find MLD's premise cliché now, but that's because it pretty much spawned its own genre, and let's face it, 500,000+ views is about as irrefutable an argument for "this is a compelling idea" as you can get.)

Technical - The quality of the English in the story, both small-scale and medium-scale. Spelling, grammar, punctuation. Colorful language, proper showing/telling, literary devices and metaphors, authorial voice. The category that's easiest for a proofreader to fix. (20: Neither strong nor weak relative to the story's other categories. Aside from comma overuse, MLD is solid enough. The language is not exciting, but nothing in it is poor enough to knock you out of the narrative.)

Execution - The strength with which the idea was sold. Great execution can redeem a cliché idea, and poor execution can make a great idea disappointing. This is NOT about the quality of the language use — it's the punchline to Idea's setup. (0: The story's weakest feature. I've long been frustrated by MLD's failure of worldbuilding; it has been deconstructed in many places I don't feel like taking the time to google, so I'll leave this as an exercise for the reader.)

Consistency - How much it feels like the author was writing at the limit of their skill throughout. Low Consistency indicates that there were both brilliant and painful spots, and focused pinpoint editing will help smooth out the latter to dramatically increase the overall quality; high Consistency indicates that good stories are really good, and that poor stories have more systematic problems rather than areas needing touch-ups. In write-off competitions, low Consistency often comes from deadline pressures causing the ending to collapse like an undercooked soufflé. (30: A relative strength of the story. MLD is quite evenly presented. If you like it, there's very little to knock you out of that enjoyment, and if you dislike it, the things that you don't like will be equally bad all the way through.)

Make sense?

I need to emphasize: HITEC is normalized at 100 points per story. It is NOT a judgment of quality, and it doesn't "grade" a story; it's an at-a-glance view of what areas the author should address with editing/rewriting. Don't think that your story's scores make it "better" or "worse" than stories with different HITEC scores.

The only thing a HITEC score can be usefully compared with is the SAME STORY'S scores in other categories. If you score Technical-40 and "Hamlet" scores Technical-30, it doesn't mean you're playing with language to put Shakespeare to shame; it just means that a proofreader should be the last thing on your mind when you're revising. If you score Consistency-0 and "The Eye Of Argon" scores Consistency-40, it doesn't mean your story is horrible beyond redemption; on the contrary, it means that there's both awful and great in your story, and your most urgent priority is to find the bad parts and make them as great as the great parts. (For an extreme example, the first five-or-so chapters of The Ambassador's Son are among the most magnificent stories on FIMFiction, and the next-to-last chapter literally made me want to print it out just so I could throw it across the room.) Focus your limited editing time on the areas with lowest scores first.
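(In code terms, the editing to-do list is just the categories sorted weakest-first. A sketch, using the Dashie numbers from above:)

def editing_priorities(scores):
    # Weakest categories first: spend your editing time on these.
    return sorted(scores, key=scores.get)

mld = {"Hook": 10, "Idea": 40, "Technical": 20, "Execution": 0, "Consistency": 30}
print(editing_priorities(mld))
# ['Execution', 'Hook', 'Technical', 'Consistency', 'Idea']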

The best story in the universe would have a 20 in every category, because it would all be equally brilliant. A story that made my eyes weep blood would have a 20 in every category, because it would all be equally irredeemable. The things that we write will fall in between those extremes — and they'll be good and bad in different ways.

This is where you all come in.

Does the system make sense at a glance? Are there other things the scale should measure (in addition to, or instead of, its current categories)? If you got a HITEC assessment of your story, would it give useful direction to your editing process?

If you want to see the scale in action, I've posted HITEC ratings for all 52 of my reviews of August's Writeoff Association competition.

--

If you'd like to use HITEC yourself, here's a template you can copy and paste; just change the numbers:

[b][color=#ea80b0]H[/color][/b]-20 [b][color=#6aaadd]I[/color][/b]-20 [b][color=#a66ebe]T[/color][/b]-20 [b][color=#e6b91f]E[/color][/b]-20 [b][color=#e97135]C[/color][/b]-20
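(Or, if you'd rather not fill in BBCode by hand, here's a throwaway Python helper that builds the template string; the colors are the same ones as above.)

COLORS = {"H": "#ea80b0", "I": "#6aaadd", "T": "#a66ebe", "E": "#e6b91f", "C": "#e97135"}

def hitec_bbcode(h, i, t, e, c):
    # Build the bolded, colored "H-20 I-20 T-20 E-20 C-20" BBCode string.
    return " ".join(
        f"[b][color={COLORS[letter]}]{letter}[/color][/b]-{score}"
        for letter, score in zip("HITEC", (h, i, t, e, c))
    )

print(hitec_bbcode(10, 40, 20, 0, 30))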

(Note: I've edited this post to reflect that HITEC is now 100 points split up between the categories. Originally, it was 10 points split up.)

Comments ( 39 )

Instead of numbers, wouldn't it be more intuitive to just rank the five qualities from best to worst?

horizon, stop being so smart. It makes me feel inadequate. Seriously though, I like the idea, but I have to ask how you would reliably objectify the subjective qualities of any given story. What I might give 2s to in each category might get a whole different set of numbers from someone else.

2392600
But then you don't get such a cool acronym!

2392600
Looking at my example, that might seem equivalent — but I ended up with a lot of scores like 3-3-1-2-1, and even a few 2-2-2-2-2s, and simply ordering them seems like it gives false information in those cases.

(Plus the acronym factor. :yay:)

2392629
Well, no system is totally objective, because we're individual sapients applying our own biases rather than members of some massive overmind, but that's probably worth thinking about.

On the other claw, if different reviewers give wildly different HITEC scores, then that might be a signal to the author that there's a lot of noise in the rankings — and to look at the things on which their critics all agree. We already do that with reviews; the numbers might just help in streamlining that process a bit.
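(A sketch of how one might automate that agreement check, for the curious; the 10-point spread threshold here is completely made up:)

from statistics import pstdev

def consensus_categories(reviews, max_spread=10):
    # reviews: a list of dicts mapping category name -> score,
    # one per reviewer. Keep the categories where reviewers roughly agree.
    return [cat for cat in reviews[0]
            if pstdev(r[cat] for r in reviews) <= max_spread]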

PresentPerfect
Author Interviewer

the next-to-last chapter literally made me want to print it out just so I could throw it across the room.

I laughed. What happens in that chapter? I don't remember at all.

2392600
You need numbers for legibility. Unless you plan to color-code everything, you have to leave them in a specific order so their significance can be understood at a glance. This is more important when getting feedback from a number of sources, as it's easy to line up and compare a number of columns to see if the same aspect was ranked the same (or close) across all critics/judges.

As to the system itself, horizon, I think it holds promise, but I would like to see a few dry runs. The 52-story sample pool you're proposing is a mite large, IMO. A handful of stories (perhaps a half-dozen, tops) would be sufficient. You'd need to give a detailed breakdown as to how and why each rank was achieved, so that those reading the review / evaluating the system have a clear understanding of how it works in action. Additionally, you'd want to have a few (2-3) co-judges go through and do the same thing, so that we get a useful example of how it might work in large-scale judging when authors are fielding feedback from multiple sources. Keeping the sample size and judging panel small also allows you to potentially zero in on problems quicker. Once that's addressed, then see about scaling up to a full contest sample size.

Very cool idea; I like that there are few enough parts to keep each one distinct. I think this would be useful information for a reader picking stories to read, too.

It's a little hard to remember the categories, though. I think I would either need to keep a reference doc handy or get some reviewing automation built into the site.

2392670
That was when (Ambassador's Son spoilers!) the villain picked up an Idiot Ball just so he could die without the protagonist getting any blood on his hooves, after said protagonist promised to murder him; and then said protagonist (a ten-year-old child) launched into a spirited defense of slavery which everyone else simply rolled over and accepted.

It may not have been the second-to-last chapter exactly but it was right at the end.

2392639
Fair enough, though I honestly see the numbers as potentially getting in the way. If someone sees the numbers, there's a chance they might skip everything else (conversely, there's a chance a reviewer might just give the numbers alone).

The numbers themselves might also get in the way for reviewer, reader, and writer alike, as they try to associate the numbers with the contents of the review without necessarily paying thorough attention to the review itself.

Personally, I dislike the idea because of my own bias against associating rankings of any sort with written stories. I think it's also worth noting that your system in its current form allows stories that are inarguably terrible and justifiably amazing to have the same scores — if someone decides to look only at these scores, then this doesn't help.

Or maybe I'm overthinking things like I'm prone to and showing off my typical pessimism. Don't get me wrong, I do like the idea you're shooting for, but I can't say confidently that it's one I can rely on.

Agh, shit, phone's dying.

HITEC: horizon is too endearingly cute.

PresentPerfect
Author Interviewer

2392697
Yeah, that story did have some uncomfortable threads. :/

Since I didn't actually comment on the meat of your journal, I want to say, I think this system could work, and it would definitely be more informative to authors than the standard "X out of Y" scales. That said, there would be a learning curve to understanding it, and a sharp one for the reviewers actually implementing it. (I'm suggesting it would be hardest for reviewers, since it is such a unique system.)

2392681
Eh, well, it kinda stuck in my brain while I was writing up the reviews, and it was simple and quick enough to throw in (given that I was writing 200+ words of feedback already anyway) that I just went and did it. :twilightsheepish: Most of the reviews and HITECs are already done.

2392683
It does have some useful information for readers, yeah. For instance, if you know your tolerance for typos is low and a story scores low on Technical, you can predict it'll bug you and stay away. Or if someone's story scores high on Execution, even if it's otherwise rough, I might want to give it a read just to see what it did right. A happy side effect. :twilightsmile:

Personally, I would like to see it either out of 20, with half points, or as percentages out of 100. Other than that, I love it.

which can be like a geologist pointing to some granite slabs and talking about earthquake faults while in the middle of a beautiful valley full of breathtaking flowers and sparkling rainbow waterfalls.

So Maud is reviewing stories now?

It seems like an interesting idea to me, but I'm also having a hard time seeing it as actually being more useful than something more traditional, like giving each of those categories their own 0-10 score.
But that might just be because I have not yet written and published any real stories here. Maybe I would feel differently if I was an author.
Actually, I realized as I was typing this that while this might, for all I know, be the greatest thing to happen to writing since the invention of ink, I don't think it will be very useful for readers at all. If explanations are given with each rating, they might help some. But even then, I'm not sure that any rating with this scale would make me more or less likely to read any given story. Unless there are a few zeros, I guess. But since this is obviously more for writers than readers, I doubt this is all that much of a problem.

2392722
On deeper consideration, percentages would both:

A) Offer finer-grained distinctions, instead of nudging a point or two around
B) Reinforce the idea that this is about relative strengths of the story — "Technical 30%" seems to imply "this is 30% of the story's excellence", as opposed to "Technical 3", which reads like a grade.

I think I like that.

You know, I've been thinking about getting into reviewing (sort of as a fun thing to do as I clear out my Read Later list), and this might be a system worth trying out. I like the breakdown of strengths versus weaknesses, and using a percentage to quantify them is a nifty way to work that out.

If I end up using it, I'll let you know.

There will, of course, be bugs to iron out, but I like this idea. I'm quite interested in seeing your "trial" stats. I think I'll defer on making suggestions until then, though, since I'm not sure if I'll have anything useful to recommend before seeing how it works as-is.

2392747 Also, it took me until half an hour ago to realize the subtle Calvin and Hobbes nod.

I'm not sure about the colors. :twistnerd:

I would say "maybe white for Idea and independently maybe cyan for Execution", but then you get white on white if there's a white background. Curses! (Yes, I know the white could also be bluish, but if you're going to go that route then the orange and such make no sense for symmetry. Gray gives the wrong impression, and violet would collide with the other violet, which could be pink but then you have a chain reaction of collisions of colors which makes me think of Color Collision from the Microsoft Entertainment Pack and :pinkiecrazy:

*cough*

)

I chose a case study, the long story I most recently finished, and it revealed one of the ways this technique is much more useful to writers than readers: the story scores low on consistency because the prose improves so much over the year-and-a-half the author spent writing it and, presumably, honing their craft.
(It also reveals the limitation that no author is going to care about the score on such a long story; "rewrite the first half or so" is not such a helpful request on a 157 kiloword story.)

2393398
That's good feedback, especially since I was coming up with this in the context of a minific competition and hadn't thought of applying it to longer works yet (it would be most useful if given in a situation where it can lead to editing changes).

Actually, as a potential reader, that's great information for me as I approach it: low Consistency = "it starts rough and gets better." (That highlights how the numbers would be most useful in concert with at least some small explanation.)

If you don't mind, what would be the full set of scores you'd give it and why? (And if the rest of y'all want to play around with it to get a sense for the system, feel free to HITEC-rate one of my own stories — I'm assuming that since we're here at my blog, we're all more likely to have read one of those.)

2393417 2-3-1-3-1, I guess; I don't really remember the intro well by now, but it was obviously good enough to get me hooked on the story as a whole. I kind of want to score Execution higher than Idea, because the (somewhat cliché) premise is a good one but not as notable as the (really inventive combination of existing cliché premises) climax/reveal/etc., although then again it gets dinged because the survival stuff isn't always a particularly interesting example of its kind before/outside of the big reveals, and it's the survival stuff that drew me in in the first place... maybe if I was an actual author/prereader and knew what the hell really mattered here, that would help.
And low C isn't necessarily a helpful sign; for example, The Immortal Game shares with Stranger in a Strange Land the reverse of the improve-as-you-go tendency, where they start out amazing and gradually get worse. (By way of softening my criticism, I'll point out that Game is actually better than Land on this particular axis.)
On that note, should we assume C is purely about technical/prose quality? Some readers might find genre shifts or other changes that can feel inconsistent to be a turn-off (although personally I love them when done well).

It's an interesting idea, but the way it's presented feels like it would be remarkably easy either to do incorrectly or to interpret incorrectly. It needs polishing, I think.

Which means beta testing time! :rainbowdetermined2:

Wow, I think this idea is brilliant, and I'm surprised nobody thought of implementing it before :pinkiesmile:

And now that I think about it, it could be combined with the traditional 5-point scale for the reader's benefit without losing any information: simply multiply each HITEC score by the grade the story got.

EDIT: I just figured out that wouldn't actually work. Nevermind.

For some reason, I'm kinda irked that HITEC isn't an actual word, but I suppose new words have to come from somewhere... :rainbowlaugh:

And I must say I can't wait to hear your opinions of the fics in the writeoff; the current crop was interesting and varied wildly in quality, which I'd say is a good thing.

A fascinating idea. My gut reaction is that just the numbers would be almost useless, since it tells the author almost nothing about why the story scored what it did... but isn't that the case for any rating system? I'm honestly having a little trouble wrapping my mind around this because of how different it is. Measuring relative quality instead of absolute is a surprisingly large paradigm shift.

In short, I love this idea, and I look forward to seeing it in action in your minific reviews.

This seems like a novel and useful idea, but I think a lot of people might not get the 'distribute ten points among categories' thing; everyone is just too used to a 1-10 system. I'd be interested in trying to get it implemented, though. The only problem I see is one you pointed out: the best story ever written and the worst story ever conceived would both score 2-2-2-2-2.

Distributing stats for my stories
Can't I just roll 4d6 instead?
Sorry, sorry. :D
It seems like an interesting system. I'm looking forward to your reviews!

2393815
:flutterrage: NO 4d6 FOR YOU. THIS CAMPAIGN USES 6x 3d6, IN ORDER, NO TAKEBACKS, MISTER.

But you can borrow my "special" dice, as long as you don't tell the other players.

2393781 2393659
Yeah, as pointed out, this is geared toward providing editing feedback to authors rather than level-of-quality feedback to readers — it certainly could be combined with a more traditional rating scale, if you were trying to do both. Multiplying them together gets you garbage, but "this is how much I liked it and this is why" as two distinct items could be useful.

As pointed out, the closer to uniform the scores are, the less useful the HITEC rating is — those 2-2-2-2-2s for the best and worst stories in the world both mean "I have no useful feedback on how this could be better edited" (fix everything and/or fix nothing). If stories at medium levels of quality are getting even scores, that probably illuminates a fault in the system — that there's a different type of writing problem that it's not able to address.

2394316
i'm so glad i've never had to play like that

2393659
I pronounce it like "high-tech". The acronym may or may not have been shuffled around accordingly. :trollestia:

2393524
What parts are most confusing?

2393460

should we assume C is purely about technical/prose quality?

Absolutely not. C is about the difference in enjoyability between the best parts and the worst parts — whether it's frustrating and enjoyment-killing at the points when the story stumbles, or whether you reach a glitch and notice it and don't care.

This applies whether it's typos/grammar/tense changes, or a character idiot-balling, or insane worldbuilding, or any of a thousand other writing faults that break you out of the narrative.

2394381
AHAH! That is mildly brilliant. Kudos :pinkiehappy:

2394381
The way that the numbers relate to each other. Really, it comes down to the way the system is specifically meant to judge aspects of the story as they relate to other aspects of the same story. But people are largely used to rating stories against other stories. This gives the numbers context. 5 stars, for instance, indicates to people that this story is good, because other 5-star stories are good. (That's the idea, anyway.) This system lacks the same kind of context, and it will almost certainly result in some confusion, at least initially. People would expect the numbers to be based on some kind of outside scale, whereas the system here is explicitly internal.
Now, don't get me wrong, I think this is a brilliant idea, and one that addresses real problems in critiquing something, but it runs a bit counter to how people are used to thinking when they rate things. But that just means it's all the more important to get it out there. If it gets used enough, then naturally people will begin thinking in the terms it needs to.

2392747

If you go for percentages, I would change the total to 200%, or perhaps even 250%, so as to get higher numbers in individual categories, potentially close to 100% for an imbalanced story; I think it would be more pleasant to look at this way, rather than seeing a bunch of low scores in every category.

250% to have the average score be 50%.
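(A trivial sketch of that rescale in Python, for illustration; note that rounding can knock the total off by a point or two in the general case.)

def rescale(scores, new_total=250, old_total=100):
    # Same relative proportions, bigger numbers; the average category lands at 50.
    return {cat: round(v * new_total / old_total) for cat, v in scores.items()}

print(rescale({"Hook": 10, "Idea": 40, "Technical": 20, "Execution": 0, "Consistency": 30}))
# {'Hook': 25, 'Idea': 100, 'Technical': 50, 'Execution': 0, 'Consistency': 75}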

I really like the idea, but I am not sure it's an advantage to have a normalized vector rather than a raw one. That is, if you ranked each component relative to the universe of other stories, they should still have the same rankings relative to each other. It seems the advantage is in making judging easier, and making being judged less painful.

A cool idea, but my gut agrees with Bad Horse above that a rating would be more useful if it could be applied relative to other stories.

I am also not certain that I would use the same components in ranking stories that you have. I will have to think about it.

At any rate, I found the HITEC score you gave my write-off story useful, but only in the areas in which I didn't score 20. So I guess, as an aid to focusing editing, it works.

I like the general idea of this; it kinda reminds me of those skill-set circle graphs, but I think it's too difficult to get info out of the numbers alone. With it set up like this, where 2-2-2-2-2 can be an equally good or equally bad story, it requires a review to get any info. Maybe have it go from 0-5 for each category, as related to the story itself. So a 0 would mean that that element weighed down on the story despite any other good elements, and a 5 would mean it brought the story up despite any bad elements. Though I think you'll have to change Consistency to something else, since it wouldn't really work when it's no longer in percentages. So an author can look at the final tally of something like 1-4-3-2-4 and instantly realize that they:

H: Failed to get the reader interested with the title/first few paragraphs

I: Had an interesting idea that helped elevate the story.

T: Had sporadic grammar/punctuation/spelling errors that, while noticeable, didn't detract too much from the story.

E: Poorly executed the idea, diminishing the overall story.

C: Whatever C is, they did it well.

This makes the score independent of the review, yet it also complements the review. It also means that authors won't be staring at a 2-2-2-2-2 and wondering if they are good or bad at writing. Now they will know it's because they are bad :D

And :pinkiegasp: you could show these using Prose Radar, just like the Groove Radar from Dance Dance Revolution! *poof* *imagination bubble* Hook is Air, 'cause you have to jump up and hit the reader with several things at once to smack them out of their silly little reality and drag them in with you, Technical is Stream, 'cause the words and sentences and stuff have to follow each other in the right order, Execution is Freeze, 'cause you have to actually hold on to each of the things you put in and not let up on following through, Consistency is Voltage, 'cause when everything suddenly pulls you this way and that way is right when you can't suddenly get tired and drop all the words on the floor or else your reader's Immersion Bar will go *weeoooo* down to zero and the shark will make them jump, and Idea is the chocolate—milky—goodness—of—Chaos!

Doesn't that sound awesome, everypony? :pinkiehappy:

I like it. (The idea of it, anyways. There could always be problems in execution. For example:)

I second needing the reference doc.

You slipped up and only gave MLD 9 points. That might cause trouble. (I haven't seen anyone else mention it yet, so I am.)

I think running a bunch of "most dangerous game" stories through HITEC would also be a useful demo. Probably doesn't have to be all of them, only the "top" N or so.

2392713
I see it as in addition to, and on a different scale than, a typical recommend/don't-recommend (upvote/downvote) scale. Some examples:

Downvote, H I T E C 3-3-1-2-1
A wonderful idea brought down by bad writing.

Upvote, H I T E C 3-0-1-3-3
A genuinely blah idea smashingly executed, although I couldn't always tell if the malapropisms were intentional.

Upvote, H I T E C 2-3-2-2-1
Sure, it drags in places, but I've never seen something like this.

Downvote, H I T E C 3-2-1-2-2
It's a great lead-in, but I'd honestly rather be reading something else.

Upvote, H I T E C 0-3-2-3-2
A brilliant exploration. Skip the first four chapters, they're recapped anyways.

Horizon, am I doing it right?

Also, doing these fake mini-reviews, I notice a possible flaw: "Consistency" feels weird relative to the others. The other four feel like "higher is always better", but with Consistency it seems higher could mean "this is really good" or "this is really bad".

It could just be me looking at it the wrong way. Maybe low consistency is "this is rocky and uneven" and high consistency is "this felt equally well done throughout"? Or to put it another way:

Downvote, H I T E C 0-4-0-0-6
I tried really hard to like this. I kept looking for it to get better.
It didn't.

?

2398116
I can't decide if that's a useful idea. On the one hand, it makes scoring way simpler. It would appear to make interpretation simpler too, but I'm withholding judgement on that--I feel it's very easy to get a false positive there. On the other hand, relative amounts make it easy to see where you dropped the ball the worst.

I'm also tempted to try to code up some sliders to make giving a score easier.

2415701
You're spot on about Consistency, and I can't quite decide whether that's a bug or feature. It is, it must be said, useful to have a score as a "counterweight" that doesn't work quite like the others, so there's something to soak spare points from low scores or to borrow from for high scores; it cuts some of the sting from normalization.

Dashie got ten points. (1 + 4 + 2 + 0 + 3)

General acknowledgement of the rest (plus the comments above it); just trying to clear a few other things off my plate and let this simmer on a back burner before returning to it. I think the main thing I need to introspect on at this point is whether to stick to my guns on keeping the scores normalized, or to give in to what people are suggesting and make scores relative to absolute quality. Both have tradeoffs.

What will definitely be changing in the next version is making it out of 100 rather than 10; that offers a little finer grain in the scoring and (if I keep it normalized) makes the numbers into percentages.

2425989
(maybe I'm repeating myself, but:)
The quirk with Consistency is that it doesn't scale the way the others do. With the others, higher is better, relative to itself. With Consistency, higher just means smoother, with no indication of whether that's relatively a good thing or relatively a bad thing. (For example, a consistently awful downhill spiral.)

(A thought: maybe high Consistency means the author was (more-)constantly writing/editing near but within the limits of their ability, while low Consistency means they were falling shy of those limits, or attempting to exceed them, or something. Not "that's what it's aimed at" so much as "that's a thing it ends up measuring".)

Darn it, she did. I can't believe I missed that, repeatedly. I'm going to arbitrarily and groundlessly blame the color.
:applejackunsure:

I think, in order to remain useful as an author informational tool, stories have to remain scored out of some n points and relative to themselves. If you make them relative to the universe of stories, you end up with the by-definition-impossible task of choosing reference stories. If you use 2398116's idea, and make each score absolute but interpret them relative to the story, you end up with a massive temptation to shoot for a max-max-max-max-max story, and many people will still end up trying to treat them relative to the universe of stories.

They will anyways, but at least with story-relative scores you have to approach it like "oh, this story has a mediocre hook and a dearth of new ideas, and I tend not to like those, but this one has a so-so hook and strong new ideas, so I'll check it out first".

I'm envisioning a web tool with five sliders, one for each category, and dragging one of them up drags the others down. That'd work well with large point sets like 100 or 250. I'm just not sure I know how to start it.
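(One possible starting point for the slider math, sketched in Python rather than anything web-ready: when one slider moves, scale the other four proportionally so the total stays fixed. All the names here are placeholders.)

def move_slider(scores, category, new_value, total=100):
    # Pin one category at its new value, then renormalize the rest
    # so the five scores still sum to `total`.
    others = {c: v for c, v in scores.items() if c != category}
    remaining = total - new_value
    old_sum = sum(others.values())
    if old_sum == 0:
        adjusted = {c: remaining / len(others) for c in others}  # split evenly
    else:
        adjusted = {c: v * remaining / old_sum for c, v in others.items()}
    adjusted[category] = new_value
    return adjusted

scores = {"Hook": 20, "Idea": 20, "Technical": 20, "Execution": 20, "Consistency": 20}
print(move_slider(scores, "Idea", 60))
# The other four drop from 20 to 10 each; the total stays at 100.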


Incidentally: I was talking with my mom about a bad story she was in the midst of, and the trio "interweave - intrigue - information" came up. That made me think of this; it might be another useful story aid.

Login or register to comment