Recently,
Amazon released lists of its 100 top selling titles for Kindle ebooks. In some previous blogs, we looked at how
Indies (self or independently published ebooks) compared to Trads (traditionally
published or trade published ebooks) in terms of books sold (as measured by
ranking within the top 100), reader satisfaction (as measured by Amazon
reviews) and imputed revenues/earnings. We
also looked at how well sales could be estimated from reviews, via several
lines of evidence. (For more information
regarding the findings from those earlier analyses, see the previous blogs Amazon
Top 100 Kindle Books - Indies versus Trads Part 1, Amazon Top 100 Kindle Books
- Indies versus Trads Part 2, Amazon Top 100 Kindle Books - Indies versus Trads
Part 3, and Amazon Top 100 Kindle Books - Relationship between Sales Rank and
Number of Reviews.)
Note: Since
this blog is a bit lengthy, I will put an “executive summary” at the front,
repeating the summary remarks made at the conclusion of the blog. So, to sum up (prematurely):
·
The top best-sellers get fewer reviews than expected, based on sales rank; books further down the list get more reviews
than one would expect from their sales ranking.
·
Male
writers were slightly more likely to be reviewed, relative to sales, but the
effect was small.
·
Indies
were slightly more likely to be reviewed than Trads, relative to sales, but
again the effect was small.
·
Higher
priced books were reviewed more often than would be expected from sales rank.
The reverse was true for lower priced books.
·
Non-fiction
was reviewed more often than would be expected from sales rank, but the numbers
of such books were too small to be able to say whether this was meaningful.
·
Romance
and Thrillers were not reviewed as much as would be expected from sales
rank. Science Fiction and Fantasy were
“over-reviewed” as were the other categories.
Now, on to the detailed blog.
In this fifth blog we continue to look at sales rank versus number-of-reviews rank, in order to explore something we might call readers' public engagement. That won't be a measure of sales or of number of reviews, but rather a blend of the two. Our intent is to discover what factors might correlate with readers' tendency to tell the world about the book they just read. Note that this measure of engagement could be either positive or negative – it is the willingness to write a review, rather than the rating given to the book, that matters here.
To this end, we will compare a book's sales rank to its number-of-reviews rank within the Amazon Top 100 list. So, for example, if a book was ranked 15th in sales but 85th in number of reviews, we would conclude that readers had low levels of engagement with the book – at least in the sense of engaging in a public space such as the Amazon reviews system. They may have liked it, but they evidently weren't motivated enough to review it. Conversely, a book that was 85th in sales rank but 15th in number-of-reviews rank would have high engagement: people who read it were evidently more motivated than average to write a review. Lastly, a book whose sales rank and number-of-reviews rank were the same or nearly so (say, ranked 50th in both) would be considered to have average engagement. So, given the limitations of our data, what we are measuring is relative engagement – does the book receive more or fewer reviews than would be expected from its sales rank?
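To make the comparison concrete, here is a minimal sketch in code. The function name and the example ranks are mine, invented for illustration; only the rank-comparison logic comes from the description above.

```python
# Relative engagement: compare a book's sales rank to its number-of-reviews
# rank within the same Top 100 list. A review rank further down the list
# than the sales rank suggests low public engagement; the reverse suggests
# high engagement. Function name and example ranks are hypothetical.

def engagement(sales_rank, review_rank):
    """Classify a book's relative public engagement from its two ranks."""
    if review_rank > sales_rank:
        return "low"      # e.g. 15th in sales but only 85th in reviews
    if review_rank < sales_rank:
        return "high"     # e.g. 85th in sales but 15th in reviews
    return "average"      # same rank on both lists

print(engagement(15, 85))  # low
print(engagement(85, 15))  # high
print(engagement(50, 50))  # average
```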
Please note that the number-of-reviews ranking relates to ranking within that Top 100 list, not within the entire set of Amazon books. But we don't have access to those numbers, so we work with the data that we have, rather than the data that we wish we had.
So, let's look at some data. First, we will look at how engagement varied by sales rank, grouping the data by decile (i.e. into 10 equal-sized categories). To that end, the table below indicates that books with high sales rankings didn't tend to receive as many reviews as we would expect, while those with lower sales rankings received more reviews than we would expect. For example, none of the books in the second decile of sales (ranked 11th to 20th) were ranked in the second decile of reviews – they were all ranked somewhere further down the list. In the table, that is indicated by the column headed "3-RevRank < SalesRank", which shows that all ten books in this decile had review ranks less than their sales ranks. This is further reinforced by the column "Sales Rank minus Rev Rank", which shows that books in this decile were rated 20 places higher in sales than in reviews, on average. Conversely, only one of the books ranked in the 9th sales decile (81st to 90th) was in that review decile – most had review rankings higher up the list. On average, they were rated 16 positions higher in their review rank than their sales rank.
Rank2 | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
1     | 2                     | 2                     | 6                     | 3.0
2     |                       |                       | 10                    | 20.8
3     | 4                     | 1                     | 5                     | 11.3
4     | 4                     |                       | 6                     | 10.6
5     | 5                     |                       | 5                     | 5.2
6     | 2                     |                       | 8                     | 6.0
7     | 6                     |                       | 4                     | -7.5
8     | 7                     |                       | 3                     | -16.4
9     | 9                     |                       | 1                     | -15.9
10    | 9                     |                       | 1                     | -17.4
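The decile grouping behind that table can be sketched as follows, using randomly generated stand-in ranks rather than the actual Top 100 data; only the mechanics (pair each book's two ranks, slice into sales deciles, average the gap within each decile) mirror the analysis.

```python
# Sketch of the decile comparison with invented data: sales ranks 1..100
# paired with a random permutation of 1..100 as review-count ranks.
import random
from statistics import mean

random.seed(1)
review_ranks = list(range(1, 101))
random.shuffle(review_ranks)                    # stand-in review ranks
books = list(zip(range(1, 101), review_ranks))  # (sales_rank, review_rank)

# For each sales decile (1-10, 11-20, ...), average the gap between the two
# rankings. A positive gap means the decile's books sit further down the
# review list than the sales list, i.e. they are "under-reviewed".
for d in range(10):
    members = books[d * 10:(d + 1) * 10]
    gap = mean(rev - sales for sales, rev in members)
    print(f"sales decile {d + 1}: mean gap {gap:+.1f}")
```

With real data, the random permutation would simply be replaced by the observed review-count ranks.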
Next, we will look at the writer's gender. The table below shows that there doesn't seem to be a very pronounced gender effect, though males do seem to be somewhat more likely to be reviewed than females, relative to their sales rank. On average, men are 9 ranks higher in reviews than in sales, while women are about 4 ranks lower in reviews than in sales. Note that the row counts don't balance – that's because there are more women writers than men in the Amazon Top 100 list.
WriterSex | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
Female    | 33                    | 1                     | 36                    | 3.8
Male      | 15                    | 2                     | 13                    | -9.0
The next table compares Indies to Trads. Indies were slightly more likely to be reviewed than Trads, relative to sales, but the effect was small.

Pub3  | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
Indie | 12                    |                       | 12                    | -4.8
Trad  | 36                    | 3                     | 37                    | 1.5
Here is the same comparison, broken out by individual publisher.

Publisher2       | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
Doubleday        |                       |                       | 1                     | 6.0
Hachette         | 8                     |                       | 11                    | 6.7
Harlequin        | 1                     |                       | 1                     | 6.5
Harper Collins   | 3                     |                       | 1                     | -18.8
Indie            | 12                    |                       | 12                    | -4.8
MacMillan        | 1                     |                       |                       | -28.0
Penguin          | 7                     | 1                     | 12                    | 4.0
Random House     | 7                     | 2                     | 6                     | 2.9
Simon & Schuster | 8                     |                       | 5                     | -2.1
William Morrow   | 1                     |                       |                       | -27.0
Next up is price range, broken out as low (under $4.00), moderate ($4.00-$7.99) and high ($8.00 and over). There does seem to be an interesting trend here – people appear to be more willing to review higher-priced books than lower-priced books. Lower-priced books' sales ranks tended to be about 8 places higher than their review ranks. For higher-priced books the opposite was true, and moderately priced books had sales ranks and review ranks that were almost identical, on average. So, there may be some sort of social-status effect here, whereby people are signalling their socio-economic status by reviewing higher-priced books disproportionately. Or perhaps they just feel more "invested" in a higher-priced book, and are thus more willing to spend a few minutes on a review.
Price2 | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
1-Low  | 11                    |                       | 19                    | 8.2
2-Mod  | 26                    | 3                     | 23                    | -1.7
3-High | 11                    |                       | 7                     | -9.1
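The three price bands can be expressed as a simple classifier. The function name is mine, but the thresholds are the ones stated in the text (under $4.00, $4.00-$7.99, $8.00 and over).

```python
# Assign a price to the blog's three price bands. The band labels match
# the Price2 column in the table; the function name is hypothetical.

def price_band(price):
    if price < 4.00:
        return "1-Low"
    if price < 8.00:
        return "2-Mod"
    return "3-High"

# Boundary checks around the $4.00 and $8.00 cut-offs.
for p in (3.99, 4.00, 7.99, 8.00):
    print(f"${p:.2f} -> {price_band(p)}")
```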
Now we get into the genre categories, which are usually quite interesting. First up is fiction vs non-fiction. While it is true that most of the Amazon Top 100 ebooks were fiction, the data does seem to show an interesting effect, whereby non-fiction readers were disproportionately more likely to write reviews. But the numbers are small, so we can only consider this a very provisional result.
Fict_or_NF  | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
Fiction     | 45                    | 2                     | 49                    | 0.8
Non-fiction | 3                     | 1                     |                       | -19.5
Here’s a
more detailed look at genre. The main
effect here is that Romance and Thrillers tend to be “under-reviewed” while the
other categories are “over-reviewed”.
Genre1                  | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
Business                |                       | 1                     |                       | 0.0
Historical Fiction      | 2                     |                       |                       | -24.5
Humour                  | 2                     |                       |                       | -22.5
LitFic                  | 6                     | 1                     | 4                     | -6.3
Religion                | 1                     |                       |                       | -28.0
Romance                 | 19                    |                       | 28                    | 6.4
Self-help               | 1                     |                       |                       | -36.0
SFF                     | 6                     |                       | 1                     | -28.6
Thriller/Suspense/Crime | 11                    | 1                     | 16                    | 4.4
A lot of the
categories in the previous table were pretty small, so we will repeat them with
the collapsed genre categories below.
Genre2                  | 1-RevRank > SalesRank | 2-RevRank = SalesRank | 3-RevRank < SalesRank | Sales Rank minus Rev Rank
LitFic                  | 6                     | 1                     | 4                     | -6.3
Other                   | 6                     | 1                     |                       | -22.6
Romance                 | 19                    |                       | 28                    | 6.4
SFF                     | 6                     |                       | 1                     | -28.6
Thriller/Suspense/Crime | 11                    | 1                     | 16                    | 4.4
Again, the outstanding feature of the data is that Romance and Thrillers don't get as many reviews as might be expected from their sales rank. The big winner here seems to be Science Fiction and Fantasy. Perhaps it is not surprising that it gets a lot of reviews, as readers of these categories are often well educated, literate, and confident in their communication skills. This was also true of literary fiction, though to a lesser extent.
So, to sum
up:
·
Best-sellers get fewer reviews than expected, based on sales rank; books further down the list get more reviews
than one would expect from their sales ranking.
·
Male
writers were slightly more likely to be reviewed, relative to sales, but the
effect was small.
·
Indies
were slightly more likely to be reviewed than Trads, relative to sales, but
again the effect was small.
·
Higher
priced books were reviewed more often than would be expected from sales rank.
The reverse was true for lower priced books.
·
Non-fiction
was reviewed more often than would be expected from sales rank, but the numbers
of such books were too small to be able to say whether this was meaningful.
·
Romance
and Thrillers were not reviewed as much as would be expected from sales rank. Science Fiction and Fantasy were
“over-reviewed” as were the other categories.