Designing with Data: Improving the User Experience with A/B Testing



Product details

Paperback: 370 pages
Publisher: O'Reilly Media; 1 edition (April 20, 2017)
Language: English
ISBN-10: 1449334830
ISBN-13: 978-1449334833
Product Dimensions: 6 x 1 x 9.2 inches
Shipping Weight: 1.2 pounds
Average Customer Review: 4.2 out of 5 stars (9 customer reviews)
Amazon Best Sellers Rank: #72,432 in Books

Customer reviews

The strength of this book is that it's written for designers, a group that sometimes considers A/B testing as "competing" with the creative process. The authors point out the complementary value and call the "genius designer" a myth. The weakness of the book is that the statistics are wrong at times, which may mislead readers.

I have been using A/B tests and more sophisticated controlled experiments for over a decade, including leading the ExP Platform at Microsoft, which is used to run over 12,000 experiment treatments/year. Some of my work is referenced in this book, so please take this review in the appropriate context.

Here are some key points I loved:

• Great observations, such as "[Ensure] you're running meaningful learning tests rather than relying on A/B testing as a 'crutch' — that is, where you stop thinking carefully and critically about your overarching goal(s) and run tests blindly, just because you can."

• Nice quotations from multiple people doing A/B testing in the industry.

• Good observations about insensitive metrics such as NPS, which take a "significant change in experience and a long time to change what users think about a company." Another example, which is even more extreme, is stock price. You could run experiments and watch the stock ticker. Good luck with that insensitive metric.

• Good observation about metrics that "can't fail," such as clicks on a feature that didn't exist.

• Netflix found "a very strong correlation between viewing hours and retention" and "used viewing hours (or content consumption) as their strongest proxy metric for retention." Coming up with short-term metrics predictive of long-term success is one of the hardest things (see the sketch after this list).

• "Deviating significantly from your existing experience requires more resources and effort than making small iterations."

• For those who "worry that A/B testing and using data in the design process might stifle creativity ... generating a large variety of different hypotheses prior to designing forces you and your team to be more creative." Amen.

• Nice references to Dan McKinley's observations that most features are killed for lack of usage, and that unexciting features, such as "emails to people who gave up in the midst of a purchase," had much bigger potential impact to the business.

• "...changing something about the algorithm that increases response speed (e.g., content download on mobile devices or in getting search results); users see the same thing but the experience is more responsive, and feels smoother. Although these performance variables aren't 'visible' to the user and may not be part of visual design, these variables strongly influence the user experience." Great point about the importance of performance and the fact that this cannot be measured in prototypes or sketches. We ran multiple "slowdown" experiments to measure the value of performance.

• Interesting discussion of "painted door" tests and the point that it's a questionable test that misleads users. It's also unable to measure a key metric, repeat usage: once you slam into the painted door, you know not to do it again.

• Nice concept of "Experiment 0," the experiment I might run before the one being planned.

• "An inconclusive result doesn't mean that you didn't learn anything. You might have learned that the behavior you were targeting is in fact not as impactful as you were hoping for."

• An important point to remember: "When analyzing and interpreting your results, remember that A/B testing shows you behaviors but not why they occurred."

• "There is a difference between using data to make and inform decisions in one part of an organization versus having it be universally embraced by the entire organization."

• "One could believe that a designer or product person who doesn't know the right answer must not have enough experience. Actually it's almost inversely true. Because I have some experience, I know that we don't know the right answer until we test."

• "Steer people away from using phrases like 'my idea is ...' and toward saying 'my hypothesis is ...'"

• "One of the most important aspects of experimental work is triangulating with other sources and types of data."

• The book addresses ethics, which is rarely discussed.
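[Editor's note: on the proxy-metric point above, here is a minimal Python sketch of how one might check whether a short-term metric correlates with a long-term outcome on historical cohorts before adopting it as a proxy. All data below is simulated and the dependence is built in by construction; in practice you would use real historical cohorts.]

    import numpy as np

    rng = np.random.default_rng(42)
    n_users = 10_000

    # Hypothetical historical cohort: weekly viewing hours per user, and
    # whether the user was still subscribed six months later. Retention
    # probability is simulated to depend on viewing hours.
    viewing_hours = rng.gamma(shape=2.0, scale=3.0, size=n_users)
    p_retain = 1.0 / (1.0 + np.exp(-(viewing_hours - 5.0)))
    retained = rng.random(n_users) < p_retain

    # Point-biserial correlation between the short-term proxy and the
    # long-term outcome; a strong value supports using the proxy.
    r = np.corrcoef(viewing_hours, retained.astype(float))[0, 1]
    print(f"correlation(viewing hours, 6-month retention) = {r:.2f}")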
Here are some things I didn't like:

• The book is verbose. I read the electronic version, but the paperback is 370 pages, giving a sense of the size.

• Very few surprising "eye-opening" examples. Several of the papers on exp-platform, such as the Rules of Thumb paper, and the Sept-Oct 2017 HBR article on experimentation have surprising examples showing the humbling value of A/B testing. The A/B Testing book by Siroker and Koomen has great examples.

• The authors fall into a common pitfall of misinterpreting p-values (see the first sketch after this review). For example, they write:

  o "a p-value helps quantify the probability of seeing differences observed in the data of your experiment simply by chance." But the p-value is a conditional probability, assuming the null hypothesis (no difference).

  o "p = 0.05 or less to be statistically significant. This means we have 95% confidence in our result." This is wrong. The p-value is conditioned on the null hypothesis being true.

  o "A false positive is when you conclude that there is a difference between groups based on a test, when in fact there is no difference in the world.... This means that 5% of the time, we will have a false positive." Wrong again.

  o "around 1 in 20 A/B experiments will result in a false positive, and therefore a false learning! Worse yet, with every new treatment you add, your error rate will increase by another 5%, so with 4 additional treatments, your error rate could be as high as 25%." Both halves are wrong: a p-value of 0.05 does not equate to a 5% false positive rate, and adding treatments does not linearly add 5%; it's 1 - 0.95^4 ≈ 18.5%.

  o "But getting a p-value below that twice due to chance has a probability of much less than 1% — about 1 in every 400 times." The 1/400 figure assumes you can multiply the two p-values. You need to use Fisher's combined probability test (meta-analysis) instead.

• "Sometimes, one metric is constrained by another. If you're trying to evaluate your hypotheses on the basis of app open rate and app download rate, for instance, app download rate is the upper bound for app open rate because you must download the app in order to open it. This means that app open rate will require a bigger sample to measure, and you'll need to have at least that big of a sample in your test." The idea that constrained metrics require larger samples is wrong as phrased. Triggering to smaller populations is highly beneficial in practice. For example, if you make a change to the checkout process, analyze only users who started checking out. While the sample size is smaller, the average treatment effect is larger. Including users who provably have a zero treatment effect is always bad (see the second sketch after this review).
• "Larger companies with many active users generally roll out an A/B test to 1% or less, because they can afford to keep the experimental group small while still collecting a large enough sample in a reasonable amount of time without underpowering their experiment." This is the 1% fallacy. Large companies want to be able to detect small differences. If Bing doesn't detect a 0.5% degradation to revenue in a US test, it might not realize the idea is going to lose $15M/year. The experiment must be sufficiently powered to detect small degradations in high-variance metrics like revenue that we care about. Most Bing experiments run at 10-20%, after an initial canary test at 0.5%.

• "It's always great to see the results you hoped to see!" The value of an A/B test is the delta between expected and actual results. Some of the best examples are ones where the results are MUCH BETTER than what was expected.

• "if you run too many experiments concurrently, you risk having users exposed to multiple variables at the same time, creating experimental confounds such that you will not be able to tell which one is having an impact on behavior, ultimately muddying your results." You can test for interactions. Bing, Booking.com, Facebook, and Google all run hundreds of concurrent experiments. This is a (mostly) solved problem.

Thanks, Ron Kohavi
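[Editor's note: to make the review's two p-value corrections concrete, here is a minimal Python sketch (scipy assumed available) of the familywise error rate arithmetic and of combining p-values with Fisher's method rather than multiplying them.]

    from scipy.stats import combine_pvalues

    alpha = 0.05

    # Familywise error rate for k independent treatments:
    # P(at least one false positive) = 1 - (1 - alpha)^k, not alpha * k.
    for k in (1, 4, 10):
        fwer = 1 - (1 - alpha) ** k
        print(f"k={k:2d} treatments: FWER = {fwer:.1%}")  # k=4 -> ~18.5%, not 25%

    # Multiplying two p-values (0.05 * 0.05 = 1/400) does not yield a valid
    # p-value. Fisher's combined probability test is the standard fix.
    stat, p_combined = combine_pvalues([0.05, 0.05], method="fisher")
    print(f"Fisher combined p-value for (0.05, 0.05): {p_combined:.4f}")  # ~0.017

Under Fisher's method, two p-values of 0.05 combine to roughly 0.017, about 1 in 57, far from 1 in 400.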
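[Editor's note: and a sketch of the sample-size arithmetic behind the triggering and "1% fallacy" points, using the standard normal-approximation formula for comparing two proportions. The rates and lifts below are invented for illustration.]

    import math

    def users_per_arm(p, rel_lift, z_alpha=1.96, z_power=0.84):
        """Approximate users per arm to detect a relative lift in a
        conversion rate p at two-sided alpha=0.05 with 80% power:
        n ~ (z_alpha + z_power)^2 * 2 * p * (1 - p) / delta^2."""
        delta = p * rel_lift
        return math.ceil((z_alpha + z_power) ** 2 * 2 * p * (1 - p) / delta ** 2)

    # Triggering: analyzing only users who entered checkout concentrates the
    # effect. Diluted site-wide (2% purchase rate, 1% relative lift) vs.
    # triggered (20% conversion among checkout starters, 5% relative lift):
    print(users_per_arm(p=0.02, rel_lift=0.01))   # ~7.7M users per arm
    print(users_per_arm(p=0.20, rel_lift=0.05))   # ~25K users per arm

    # The 1% fallacy: detecting a 0.5% relative change in a 2% rate needs
    # ~31M users per arm; a 1% traffic allocation may never get there.
    print(users_per_arm(p=0.02, rel_lift=0.005))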

Feels very repetitive by the middle of the book. The quality of the content is lost when points aren't made succinctly.

Great book and content.

I cut my teeth on mail-order marketing, what they now call direct response. Might even be called something else now. One of the cardinal rules of mail-order marketing was, and remains, test, test, test. You tested everything: the headline, the copy, the color of the paper, its weight, everything, until you had tested enough to determine the most efficient marketing package.

This book's primary authors are eminently qualified and highly experienced. Even better, they are graceful writers. The authors define "A/B testing [as] a methodology to compare two or more versions of an experience to see which one performs the best relative to some objective measure". In other words, you test to find out what works best. The book is intended to show designers and product managers how to launch digital products using data to guide the product's refinement; in other words, how to use the wealth of data available to better market their product. Over the course of the first six chapters, they do precisely that. This stuff is really good.

The authors, one with Spotify in her background, the other with Netflix, truly understand the concept, mechanics, and worth of testing. The last two chapters smelled too much like political correctness for my taste and, in my opinion, could have been left out without harming the value of the book. If you are not thoroughly experienced with the concept of A/B testing in marketing vehicles, you will benefit from this book.

Jerry
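[Editor's note: for readers new to the mechanics this review describes, here is a minimal Python sketch of the basic comparison behind an A/B test: a two-proportion z-test on an objective measure such as conversion rate. The conversion counts are invented.]

    import math

    # Hypothetical results: conversions / visitors for each version.
    conv_a, n_a = 120, 2400   # version A: 5.0% conversion
    conv_b, n_b = 156, 2400   # version B: 6.5% conversion

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se

    # Two-sided p-value from the normal approximation.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    print(f"lift = {p_b - p_a:+.1%}, z = {z:.2f}, p = {p_value:.3f}")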

QUICK SUMMARY: Probably not a good book to start with if you're new to A/B testing.

BACKGROUND: I work in the IT industry as a project manager, so I don't work as closely now with the software development teams or QA teams as I used to. I got this book to learn more about A/B testing just for general knowledge on the topic, and it does seem written by authoritative authors. But after getting about halfway through this 300+ page book, I kinda lost the gusto to finish it. That's not a knock on the book's authors or their work, but just me saying that if you want an introduction to A/B testing, this book does offer that; I just didn't find it written in an engaging style for a non-designer IT worker.

The other issue I had was with a few of the graphics. They are much too small to be readable in a book this size. I'm of the opinion that you either right-size the graphics so they render well on a printed page, or you just don't include them. Graphics that are too small to easily read aren't worth incorporating into a book. Not all the graphics were too small to read (most were okay, in fact), but the ones that were too small ought to have been redesigned, zoomed in, or otherwise dealt with. Granted, this is a petty complaint that doesn't deal with the content of this book.
