{"id":35594,"date":"2026-05-04T08:24:33","date_gmt":"2026-05-04T08:24:33","guid":{"rendered":"https:\/\/www.nvecta.com\/blog\/?p=35594"},"modified":"2026-05-04T08:27:45","modified_gmt":"2026-05-04T08:27:45","slug":"measure-recommendation-engine-lift-metric-2026","status":"publish","type":"post","link":"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/","title":{"rendered":"How to Measure Recommendation Engine Lift (Metrics That Matter in 2026)"},"content":{"rendered":"\n<p>Measure  recommendation engine lift is the extra business value your recommender brings compared to a baseline (no recommendations, popularity-based picks, or an older model).<\/p>\n\n\n\n<p>You measure it through controlled A\/B tests that compare a treatment group seeing the new recommender against a control group, then track lift on revenue per user, click-through rate, conversion, and retention.<\/p>\n\n\n\n<p>If you only remember one thing: lift is a comparison, not a number you can pull from a dashboard in isolation.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-transparent ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#TLDR\" >TL;DR<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Why_Im_Writing_This\" >Why I&#8217;m Writing This<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" 
href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#What_Is_Recommendation_Engine_Lift\" >What Is Recommendation Engine Lift?<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#The_Simple_Formula\" >The Simple Formula<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Lift_vs_Raw_Performance\" >Lift vs. Raw Performance<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Why_Measuring_Lift_Actually_Matters\" >Why Measuring Lift Actually Matters<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#What_you_Risk_without_It\" >What you Risk without It<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#How_to_Measure_Recommendation_Engine_Lift_Step_by_Step\" >How to Measure Recommendation Engine Lift: Step by Step<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_1_Define_One_Primary_Metric\" >Step 1: Define One Primary Metric<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_2_Pick_a_Baseline\" >Step 2: Pick a 
Baseline<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_3_Set_up_the_AB_test\" >Step 3: Set up the A\/B test<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_4_Run_Offline_Evaluation_in_Parallel\" >Step 4: Run Offline Evaluation in Parallel<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_5_Measure_Online_During_the_Test\" >Step 5: Measure Online During the Test<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-14\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_6_Calculate_Lift_with_Confidence_Intervals\" >Step 6: Calculate Lift with Confidence Intervals<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-15\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Step_7_Decide_Document_Decide_Again\" >Step 7: Decide, Document, Decide Again<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-16\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#The_Metrics_That_Actually_Matter_Ranked_Honestly\" >The Metrics That Actually Matter (Ranked Honestly)<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-17\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Tier_1_Business_Outcome_Metrics\" >Tier 1: Business Outcome Metrics<\/a><\/li><li class='ez-toc-page-1 
ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-18\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Tier_2_Conversion_Metrics\" >Tier 2: Conversion Metrics<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-19\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Tier_3_Engagement_Metrics\" >Tier 3: Engagement Metrics<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-20\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Tier_4_Model_Health_Metrics\" >Tier 4: Model Health Metrics<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-21\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Real-World_Use_Cases\" >Real-World Use Cases<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-22\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Ecommerce_Beating_a_Popularity_Baseline\" >Ecommerce: Beating a Popularity Baseline<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-23\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Streaming_Watch_Time_Over_CTR\" >Streaming: Watch Time Over CTR<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-24\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#B2B_SaaS_Recommending_workflows\" >B2B SaaS: Recommending workflows<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-25\" 
href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Best_Tools_and_Platforms_for_Measuring_Lift\" >Best Tools and Platforms for Measuring Lift<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-26\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Common_Mistakes_Ive_Made_Most_of_These\" >Common Mistakes (I&#8217;ve Made Most of These)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-27\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Quick_Summary\" >Quick Summary<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-28\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Key_Takeaways\" >Key Takeaways<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-29\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#How_NVECTA_Helps_Teams_Measure_Lift_Properly\" >How NVECTA Helps Teams Measure Lift Properly<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-30\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#What_is_a_good_lift_for_a_recommendation_engine\" >What is a good lift for a recommendation engine?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-31\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#How_long_should_an_AB_test_for_a_recommender_run\" >How long should an A\/B test for a recommender run?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-32\" 
href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Whats_the_difference_between_offline_and_online_evaluation\" >What&#8217;s the difference between offline and online evaluation?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-33\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Can_I_measure_lift_without_an_AB_test\" >Can I measure lift without an A\/B test?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-34\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Why_does_CTR_go_up_but_revenue_stays_flat\" >Why does CTR go up but revenue stays flat?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-35\" href=\"https:\/\/www.nvecta.com\/blog\/measure-recommendation-engine-lift-metric-2026\/#Whats_the_most_underrated_metric_for_recommendation_engines\" >What&#8217;s the most underrated metric for recommendation engines?<\/a><\/li><\/ul><\/li><\/ul><\/nav><\/div>\n<h2 class=\"wp-block-heading\" id=\"tl-dr\"><span class=\"ez-toc-section\" id=\"TLDR\"><\/span><strong>TL;DR<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Lift = (Treatment metric \u2212 Control metric) \u00f7 Control metric.<\/li>\n\n\n\n<li>Always run a holdout group. Without one, your &#8220;lift&#8221; is just noise dressed up as a win.<\/li>\n\n\n\n<li>Offline metrics (NDCG, Recall@K, MAP) help you ship; online metrics (RPV, CTR, AOV, retention) decide if it actually works.<\/li>\n\n\n\n<li>The metrics that matter depend on what your business sells. A streaming app cares about watch time. A grocery app cares about basket size.<\/li>\n\n\n\n<li>Most teams overweight CTR. 
Lift on revenue and retention matters more.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"why-i-m-writing-this\"><span class=\"ez-toc-section\" id=\"Why_Im_Writing_This\"><\/span><strong>Why I&#8217;m Writing This<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>I&#8217;ve watched too many teams ship a &#8220;smarter&#8221; recommender, see CTR jump 8%, throw a party, and then quietly notice three months later that revenue didn&#8217;t move. <\/p>\n\n\n\n<p>Or worse, churn went up because users were getting recommendations that felt addictive but unsatisfying.<\/p>\n\n\n\n<p>So this guide is about measuring lift in a way that actually tells you something useful. Not just numbers that look good in a slide deck.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-is-recommendation-engine-lift\"><span class=\"ez-toc-section\" id=\"What_Is_Recommendation_Engine_Lift\"><\/span><strong>What Is Recommendation Engine Lift?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Recommendation engine lift is the percentage gain in a target metric when users see your recommender&#8217;s output, compared to users who see a baseline experience. <\/p>\n\n\n\n<p>The baseline could be no recommendations, top-sellers, hand-picked editorial slots, or your previous model.<\/p>\n\n\n\n<p>Think of it like a before-and-after photo\u2014except both images are captured at the same moment, using two different groups of people. 
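<\/p>\n\n\n\n<p>In practice, the two groups come from a deterministic user split. A minimal Python sketch of stable bucketing (the salt, split share, and helper name are illustrative, not from any particular library):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>
```python
import hashlib

def assign_arm(user_id, treatment_share=0.5, salt='rec-lift-test-1'):
    # Hash user_id with a per-experiment salt: the same user always
    # lands in the same arm, independent of other experiments.
    digest = hashlib.sha256(f'{salt}:{user_id}'.encode()).hexdigest()
    share = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return 'control' if share >= treatment_share else 'treatment'

# Stable across visits:
assert assign_arm('user-42') == assign_arm('user-42')
```
<\/code><\/pre>\n\n\n\n<p>Bucketing by user rather than by session is what keeps one person in one experience for the whole test.<\/p>\n\n\n\n<p>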
That simultaneous comparison is what makes lift trustworthy across the entire <a href=\"https:\/\/www.invitereferrals.com\/blog\/customer-journey\/\" target=\"_blank\" rel=\"noopener\">customer journey<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"the-simple-formula\"><span class=\"ez-toc-section\" id=\"The_Simple_Formula\"><\/span><strong>The Simple Formula<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Lift % = ((Treatment metric \u2212 Control metric) \/ Control metric) \u00d7 100<\/p>\n\n\n\n<p>Example: control group converts at 3.2%, treatment group converts at 3.7%. Lift = (3.7 \u2212 3.2) \/ 3.2 \u00d7 100 = <strong>15.6% lift<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"lift-vs-raw-performance\"><span class=\"ez-toc-section\" id=\"Lift_vs_Raw_Performance\"><\/span><strong>Lift vs. Raw Performance<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>A raw metric tells you how the recommender did. Lift tells you how much <em>better<\/em> it did than the alternative. Big difference. <\/p>\n\n\n\n<p>A recommender with 5% CTR sounds great until you find out the popularity-based baseline gets 4.8%.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"why-measuring-lift-actually-matters\"><span class=\"ez-toc-section\" id=\"Why_Measuring_Lift_Actually_Matters\"><\/span><strong>Why Measuring Lift Actually Matters<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Here&#8217;s the uncomfortable truth: most recommendation engines don&#8217;t earn their keep. 
They just look busy.<\/p>\n\n\n\n<p>Without proper lift measurement, you can&#8217;t tell if:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Your recommender is doing real work or just surfacing what users would have bought anyway<\/li>\n\n\n\n<li>The added complexity (model training, feature pipelines, infra cost) is worth it<\/li>\n\n\n\n<li>A simpler baseline would do the same job at a fraction of the cost<\/li>\n<\/ul>\n\n\n\n<p>Netflix, Amazon, and Spotify all run continuous A\/B testing on recommendation changes. <\/p>\n\n\n\n<p>Even their flagship models ship only after the lift numbers clear a bar. If they need that discipline, so does your team.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"what-you-risk-without-it\"><span class=\"ez-toc-section\" id=\"What_you_Risk_without_It\"><\/span><strong>What You Risk Without It<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Wasted engineering budget<\/strong> on models that don&#8217;t pay back<\/li>\n\n\n\n<li><strong>Optimizing the wrong thing<\/strong> (CTR up, revenue flat, satisfaction down)<\/li>\n\n\n\n<li><strong>Compounding errors<\/strong> when one untested model feeds another<\/li>\n\n\n\n<li><strong>Losing trust<\/strong> with leadership who eventually notice the disconnect<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-measure-recommendation-engine-lift-step-by-step\"><span class=\"ez-toc-section\" id=\"How_to_Measure_Recommendation_Engine_Lift_Step_by_Step\"><\/span><strong>How to Measure Recommendation Engine Lift: Step by Step<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>This is the workflow most mature data teams follow. 
Adjust to your own setup, but don&#8217;t skip steps.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-1-define-one-primary-metric\"><span class=\"ez-toc-section\" id=\"Step_1_Define_One_Primary_Metric\"><\/span><strong>Step 1: Define One Primary Metric<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Pick the single metric that, if it moves, leadership will agree the recommender worked. For ecommerce, this is usually <strong>revenue per visitor (RPV)<\/strong>. For media, <strong>watch time<\/strong> or <strong>session length<\/strong>. For SaaS feed products, <strong>30-day retention<\/strong>.<\/p>\n\n\n\n<p>Secondary metrics (CTR, AOV, items per session) are useful guardrails\u2014but without a single north star, analysis quickly turns into a choose-your-own-adventure. <\/p>\n\n\n\n<p>Anchor your existing framework around <a href=\"https:\/\/www.nvecta.com\/blog\/customer-engagement-metrics\/\">customer engagement metrics<\/a> as that north star. This keeps the focus on how actively and meaningfully users interact with your product, while the secondary metrics continue to act as supporting signals rather than competing priorities.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-2-pick-a-baseline\"><span class=\"ez-toc-section\" id=\"Step_2_Pick_a_Baseline\"><\/span><strong>Step 2: Pick a Baseline<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Your baseline should reflect what users would see <em>without<\/em> your new recommender. 
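<\/p>\n\n\n\n<p>For ecommerce, a popularity baseline is almost embarrassingly cheap to build, which is part of its appeal. A sketch, assuming a hypothetical list of (user, item) purchase events:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>
```python
from collections import Counter

def popularity_baseline(orders, k=10):
    # Top-k most purchased items over the window: a strong, cheap control arm
    counts = Counter(item for _user, item in orders)
    return [item for item, _n in counts.most_common(k)]

orders = [('u1', 'shoes'), ('u2', 'shoes'), ('u2', 'hat'), ('u3', 'belt')]
print(popularity_baseline(orders, k=2))  # ['shoes', 'hat']
```
<\/code><\/pre>\n\n\n\n<p>If your model can&#8217;t beat those few lines, that finding alone is worth the test.<\/p>\n\n\n\n<p>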
Common choices:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Baseline type<\/strong><\/td><td><strong>When to use it<\/strong><\/td><\/tr><tr><td>No recommendations<\/td><td>Brand new feature, never had a recommender<\/td><\/tr><tr><td>Random items<\/td><td>Sanity check, rarely used in production<\/td><\/tr><tr><td>Popularity-based<\/td><td>Strong default; surprisingly hard to beat<\/td><\/tr><tr><td>Editorial \/ hand-picked<\/td><td>When humans currently curate the slots<\/td><\/tr><tr><td>Previous model<\/td><td>Iterating on an existing system<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>A lot of teams skip past popularity baselines because they &#8220;feel too simple.&#8221; That&#8217;s exactly why you should test against them. If your deep learning model can&#8217;t beat a top-sellers list, you have a problem.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-3-set-up-the-a-b-test\"><span class=\"ez-toc-section\" id=\"Step_3_Set_up_the_AB_test\"><\/span><strong>Step 3: Set Up the A\/B Test<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Split users (not sessions, not impressions) into control and treatment. Use stable hashing on user_id so the same person stays in the same bucket across visits. <\/p>\n\n\n\n<p>This matters more than people think; otherwise users see both experiences and the comparison falls apart.<\/p>\n\n\n\n<p>Decide:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Sample size<\/strong>: Use a power calculator. For a 5% relative lift on a 3% conversion baseline at 80% power, you typically need on the order of 200,000 users per arm.<\/li>\n\n\n\n<li><strong>Test duration<\/strong>: Minimum one full business cycle. For most consumer apps that&#8217;s two weeks. Longer if you have weekend or payday effects.<\/li>\n\n\n\n<li><strong>Stopping rules<\/strong>: Pre-commit. 
Peeking and stopping early is how teams convince themselves of fake wins.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-4-run-offline-evaluation-in-parallel\"><span class=\"ez-toc-section\" id=\"Step_4_Run_Offline_Evaluation_in_Parallel\"><\/span><strong>Step 4: Run Offline Evaluation in Parallel<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Before the A\/B test, evaluate your model offline on held-out historical data. Standard offline metrics:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Recall@K<\/strong> \u2014 did the recommender include the item the user actually picked, in the top K?<\/li>\n\n\n\n<li><strong>NDCG@K<\/strong> \u2014 same idea, weighted so higher positions count more<\/li>\n\n\n\n<li><strong>MAP (Mean Average Precision)<\/strong> \u2014 precision computed at each relevant item&#8217;s rank, averaged over the list and then over users<\/li>\n\n\n\n<li><strong>Hit Rate@K<\/strong> \u2014 fraction of users with at least one correct recommendation in top K<\/li>\n\n\n\n<li><strong>Coverage<\/strong> \u2014 what share of your catalog ever gets recommended<\/li>\n\n\n\n<li><strong>Diversity \/ novelty<\/strong> \u2014 are you just showing the same five hits to everyone?<\/li>\n<\/ul>\n\n\n\n<p>Offline metrics are necessary but not sufficient. They correlate with online lift, but the correlation is far from perfect. 
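<\/p>\n\n\n\n<p>As a concrete sketch of the first two, computed per user with binary relevance (helper names are mine, not from a specific library):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>
```python
import math

def recall_at_k(ranked, relevant, k):
    # Share of this user's relevant items that made the top-k list
    return len(set(ranked[:k]).intersection(relevant)) / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    # Discounted hits over the ideal ordering (binary relevance)
    rel = set(relevant)
    dcg = sum(1 / math.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in rel)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, len(rel))))
    return dcg / ideal

ranked = ['a', 'b', 'c', 'd', 'e']
bought = ['c', 'x']
print(recall_at_k(ranked, bought, 3))  # 0.5: only 'c' made the top 3
```
<\/code><\/pre>\n\n\n\n<p>Averaging these over all users in a held-out window gives the offline scores you use to rank candidate models.<\/p>\n\n\n\n<p>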
Plenty of models look amazing offline and flop in production.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-5-measure-online-during-the-test\"><span class=\"ez-toc-section\" id=\"Step_5_Measure_Online_During_the_Test\"><\/span><strong>Step 5: Measure Online During the Test<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Track these continuously while the test runs:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Engagement metrics<\/strong>: CTR, items viewed per session, dwell time<\/li>\n\n\n\n<li><strong>Conversion metrics<\/strong>: add-to-cart rate, purchase rate, conversion lift<\/li>\n\n\n\n<li><strong>Revenue metrics<\/strong>: revenue per visitor, AOV, gross merchandise value<\/li>\n\n\n\n<li><strong>Retention metrics<\/strong>: D7, D30 return rate, churn rate<\/li>\n\n\n\n<li><strong>Quality \/ guardrail metrics<\/strong>: customer support tickets, refund rate, complaint rate<\/li>\n<\/ol>\n\n\n\n<p>Watch the guardrails. A CTR boost paired with a refund spike usually means your recommender learned to bait users.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-6-calculate-lift-with-confidence-intervals\"><span class=\"ez-toc-section\" id=\"Step_6_Calculate_Lift_with_Confidence_Intervals\"><\/span><strong>Step 6: Calculate Lift with Confidence Intervals<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Don&#8217;t just report a point estimate. Report:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Lift % with a 95% confidence interval<\/li>\n\n\n\n<li>p-value or Bayesian probability of being better than control<\/li>\n\n\n\n<li>Effect size in business terms (extra revenue per month if rolled out)<\/li>\n<\/ul>\n\n\n\n<p>A 4% lift with a CI of [-1%, +9%] is not a win. It&#8217;s a maybe. 
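<\/p>\n\n\n\n<p>One way to get that interval is a delta-method approximation for the ratio of two proportions. A sketch (assumes large, independent samples; in production, lean on your experimentation platform&#8217;s stats instead):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>
```python
import math

def lift_with_ci(conv_t, n_t, conv_c, n_c, z=1.96):
    # Relative lift of treatment over control, with an approximate
    # 95% interval from the delta method for a ratio of proportions.
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = (p_t - p_c) / p_c
    var_t = p_t * (1 - p_t) / n_t
    var_c = p_c * (1 - p_c) / n_c
    se = math.sqrt(var_t / p_c**2 + (p_t**2 / p_c**4) * var_c)
    return lift, lift - z * se, lift + z * se

# 3.2% to 3.7% conversion with 10,000 users per arm:
lift, lo, hi = lift_with_ci(370, 10000, 320, 10000)
print(f'{lift:+.1%}, CI [{lo:+.1%}, {hi:+.1%}]')
```
<\/code><\/pre>\n\n\n\n<p>Run those numbers and the interval still spans zero: even a 15.6% observed lift can be a maybe at 10,000 users per arm.<\/p>\n\n\n\n<p>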
Treat it that way.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"step-7-decide-document-decide-again\"><span class=\"ez-toc-section\" id=\"Step_7_Decide_Document_Decide_Again\"><\/span><strong>Step 7: Decide, Document, Decide Again<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>If lift is real and material, ship to 100%. If it&#8217;s marginal, run a longer test or kill it. Either way, write down what you tested, what moved, what didn&#8217;t, and your hypothesis for why. Future you will thank present you.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-metrics-that-actually-matter-ranked-honestly\"><span class=\"ez-toc-section\" id=\"The_Metrics_That_Actually_Matter_Ranked_Honestly\"><\/span><strong>The Metrics That Actually Matter (Ranked Honestly)<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Not all metrics carry the same importance\u2014especially when evaluating consumer products. 
Here\u2019s how I\u2019d prioritize them:<\/p>\n\n\n\n<p>A <a href=\"https:\/\/www.nvecta.com\/blog\/what-is-customer-data-platform-cdp\/\">customer data platform<\/a> helps here: it stitches user behavior together across touchpoints, so you can trace a recommendation click through to the revenue and retention outcomes further down the funnel.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"tier-1-business-outcome-metrics\"><span class=\"ez-toc-section\" id=\"Tier_1_Business_Outcome_Metrics\"><\/span><strong>Tier 1: Business Outcome Metrics<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Revenue per visitor (RPV)<\/strong> \u2014 the cleanest signal for ecommerce<\/li>\n\n\n\n<li><strong>Long-term retention (D30, D90)<\/strong> \u2014 the cleanest signal for content and media<\/li>\n\n\n\n<li><strong>Lifetime value (LTV) lift<\/strong> \u2014 the gold standard, hard to measure short-term<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"tier-2-conversion-metrics\"><span class=\"ez-toc-section\" id=\"Tier_2_Conversion_Metrics\"><\/span><strong>Tier 2: Conversion Metrics<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Conversion rate<\/strong><\/li>\n\n\n\n<li><strong>Add-to-cart rate<\/strong><\/li>\n\n\n\n<li><strong>Average order value (AOV)<\/strong><\/li>\n\n\n\n<li><strong>Subscription start \/ upgrade rate<\/strong><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"tier-3-engagement-metrics\"><span class=\"ez-toc-section\" id=\"Tier_3_Engagement_Metrics\"><\/span><strong>Tier 3: Engagement Metrics<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>CTR on recommended slots<\/strong><\/li>\n\n\n\n<li><strong>Items per 
session<\/strong><\/li>\n\n\n\n<li><strong>Session length \/ watch time<\/strong><\/li>\n\n\n\n<li><strong>Scroll depth<\/strong><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"tier-4-model-health-metrics\"><span class=\"ez-toc-section\" id=\"Tier_4_Model_Health_Metrics\"><\/span><strong>Tier 4: Model Health Metrics<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Coverage<\/strong> \u2014 are you only ever recommending 3% of your catalog?<\/li>\n\n\n\n<li><strong>Diversity<\/strong> \u2014 variety within a single user&#8217;s recommendations<\/li>\n\n\n\n<li><strong>Novelty<\/strong> \u2014 recommending items the user hasn&#8217;t seen yet<\/li>\n\n\n\n<li><strong>Serendipity<\/strong> \u2014 relevant but unexpected picks<\/li>\n<\/ul>\n\n\n\n<p>The trap: optimizing Tier 3 in isolation. Engagement is easy to move; revenue and retention are harder. If you can&#8217;t connect a CTR win to a Tier 1 metric, you probably haven&#8217;t won anything.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"real-world-use-cases\"><span class=\"ez-toc-section\" id=\"Real-World_Use_Cases\"><\/span><strong>Real-World Use Cases<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ecommerce-beating-a-popularity-baseline\"><span class=\"ez-toc-section\" id=\"Ecommerce_Beating_a_Popularity_Baseline\"><\/span><strong>Ecommerce: Beating a Popularity Baseline<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>A mid-size fashion retailer ran a personalized &#8220;you may also like&#8221; model against a top-sellers baseline. <\/p>\n\n\n\n<p>Offline NDCG looked great (+22%). Online CTR jumped 14%. Revenue per visitor moved 1.1%, barely outside the noise floor. <\/p>\n\n\n\n<p>The recommender was working \u2014 just not enough to justify the model serving cost. 
<\/p>\n\n\n\n<p>They simplified the system, kept popularity-based recommendations for cold-start users, and only introduced personalization once users had 5+ events. <\/p>\n\n\n\n<p>The result: same revenue at half the infrastructure cost. An <a href=\"https:\/\/www.nvecta.com\/blog\/what-is-ecommerce-cdp-benefits-guide\/\">ecommerce CDP<\/a> makes this kind of staged rollout easier, since the per-user event history it centralizes is exactly what those thresholds run on.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"streaming-watch-time-over-ctr\"><span class=\"ez-toc-section\" id=\"Streaming_Watch_Time_Over_CTR\"><\/span><strong>Streaming: Watch Time Over CTR<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>A video platform optimized purely for click-through. CTR went up 18%. Then they noticed average watch time dropped \u2014 users were clicking thumbnails, watching 30 seconds, bouncing. <\/p>\n\n\n\n<p>Switching the optimization target to &#8220;completed views&#8221; dropped CTR but raised total watch time and 7-day retention. They were optimizing against themselves the whole time.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"b2b-saas-recommending-workflows\"><span class=\"ez-toc-section\" id=\"B2B_SaaS_Recommending_workflows\"><\/span><strong>B2B SaaS: Recommending Workflows<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>A project management tool started recommending templates based on user behavior. Click rate on suggestions was modest (4.2%), but users who clicked had 28% higher 90-day retention. <\/p>\n\n\n\n<p>The lift wasn&#8217;t in immediate engagement \u2014 it was in habit formation. 
They had to wait 90 days to see the real win.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"best-tools-and-platforms-for-measuring-lift\"><span class=\"ez-toc-section\" id=\"Best_Tools_and_Platforms_for_Measuring_Lift\"><\/span><strong>Best Tools and Platforms for Measuring Lift<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>You don&#8217;t need a custom-built measurement stack. The ecosystem is solid in 2026.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Category<\/strong><\/td><td><strong>Tools<\/strong><\/td><td><strong>Best for<\/strong><\/td><\/tr><tr><td>Experimentation platforms<\/td><td>Statsig, Eppo, Optimizely, GrowthBook<\/td><td>Running A\/B tests with proper stats<\/td><\/tr><tr><td>Recommender frameworks<\/td><td>TensorFlow Recommenders, PyTorch, RecBole, Merlin<\/td><td>Building and evaluating models<\/td><\/tr><tr><td>Vector databases<\/td><td>Pinecone, Weaviate, Qdrant, pgvector<\/td><td>Retrieval layer for embedding-based recs<\/td><\/tr><tr><td>Analytics<\/td><td>Amplitude, Mixpanel, PostHog<\/td><td>Tracking online metrics over time<\/td><\/tr><tr><td>Managed personalization<\/td><td>Algolia Recommend, AWS Personalize, Dynamic Yield<\/td><td>Teams without in-house ML<\/td><\/tr><tr><td>End-to-end personalization platforms<\/td><td>NVECTA<\/td><td>Teams that want lift measurement, model serving, and experimentation in one stack<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>NVECTA is worth a look if your team is small or if your data scientists are spending more time on infra than on modeling. 
<\/p>\n\n\n\n<p>It handles the experimentation layer, A\/B testing, and online metric tracking in one place, which removes a lot of the duct tape teams usually end up writing themselves.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"common-mistakes-i-ve-made-most-of-these\"><span class=\"ez-toc-section\" id=\"Common_Mistakes_Ive_Made_Most_of_These\"><\/span><strong>Common Mistakes (I&#8217;ve Made Most of These)<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p><strong>Trusting offline metrics too much.<\/strong> A 30% lift in NDCG means almost nothing if it doesn&#8217;t show up in revenue. Offline is for ranking candidate models, not for declaring victory.<\/p>\n\n\n\n<p><strong>Peeking at A\/B tests.<\/strong> Looking at results daily and stopping when you &#8220;see a winner&#8221; inflates false positive rates dramatically. Pre-commit to a sample size and stick to it.<\/p>\n\n\n\n<p><strong>Ignoring novelty effects.<\/strong> Users react to anything new. The first week of a test often shows inflated lift that fades. Run long enough to see the new-toy effect wear off.<\/p>\n\n\n\n<p><strong>Measuring only short-term lift.<\/strong> A recommender that boosts impulse buys this week but trains users to ignore your emails next month is not a winning recommender.<\/p>\n\n\n\n<p><strong>Comparing apples to oranges.<\/strong> Your treatment and control populations need to be statistically equivalent. If your hashing is broken, all bets are off.<\/p>\n\n\n\n<p><strong>Forgetting cold-start users.<\/strong> New users have no history. Your model might look great on power users and terrible on new ones, but the average looks fine. 
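<\/p>\n\n\n\n<p>The cheapest defense is to compute lift per segment, not just overall. A sketch (the row shape and segment labels are hypothetical):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>
```python
from collections import defaultdict

def lift_by_segment(rows):
    # rows: (segment, arm, converted) tuples, arm is 'treatment' or 'control',
    # converted is 0 or 1. Assumes every segment has users in both arms.
    tallies = defaultdict(lambda: {'t_conv': 0, 't_n': 0, 'c_conv': 0, 'c_n': 0})
    for segment, arm, converted in rows:
        t = tallies[segment]
        key = 't' if arm == 'treatment' else 'c'
        t[key + '_conv'] += converted
        t[key + '_n'] += 1
    lifts = {}
    for segment, t in tallies.items():
        p_t = t['t_conv'] / t['t_n']
        p_c = t['c_conv'] / t['c_n']
        lifts[segment] = (p_t - p_c) / p_c * 100  # lift % for this segment
    return lifts
```
<\/code><\/pre>\n\n\n\n<p>An overall positive number can hide a negative lift on new users behind a big win on power users.<\/p>\n\n\n\n<p>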
Always segment.<\/p>\n\n\n\n<p><strong>Skipping guardrail metrics.<\/strong> If support tickets jump or refunds spike, the lift might be costing you more than it earns.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"quick-summary\"><span class=\"ez-toc-section\" id=\"Quick_Summary\"><\/span><strong>Quick Summary<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Measuring recommendation engine lift comes down to one discipline: comparing what you have against what you might have, in real conditions, with real users, using metrics that connect to revenue or retention. Everything else is theater.<\/p>\n\n\n\n<p>Pick one north-star metric. Run a clean A\/B test against a meaningful baseline. Watch your guardrails. Report lift with confidence intervals. Ship if it works, kill it if it doesn&#8217;t.<\/p>\n\n\n\n<p>The model isn&#8217;t the point. The lift is.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"key-takeaways\"><span class=\"ez-toc-section\" id=\"Key_Takeaways\"><\/span><strong>Key Takeaways<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Lift is always a comparison; pick your baseline carefully<\/li>\n\n\n\n<li>Run user-level A\/B tests, not session-level<\/li>\n\n\n\n<li>Offline metrics rank candidates; online metrics decide winners<\/li>\n\n\n\n<li>Revenue per visitor and retention beat CTR every time<\/li>\n\n\n\n<li>Always include guardrail metrics (refunds, complaints, support tickets)<\/li>\n\n\n\n<li>Pre-commit to sample size and test duration; don&#8217;t peek<\/li>\n\n\n\n<li>Segment results by user type (new, returning, power users)<\/li>\n\n\n\n<li>A simpler baseline you can&#8217;t beat is itself a useful finding<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\" 
id=\"how-nvecta-helps-teams-measure-lift-properly\"><span class=\"ez-toc-section\" id=\"How_NVECTA_Helps_Teams_Measure_Lift_Properly\"><\/span><strong>How NVECTA Helps Teams Measure Lift Properly<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>If you&#8217;re tired of stitching together experimentation tools, model serving, and analytics dashboards just to figure out whether your recommender is working, that&#8217;s exactly the problem NVECTA was built to solve.<\/p>\n\n\n\n<p>NVECTA gives product and data teams a single place to deploy recommendation models, run controlled A\/B tests with proper statistical rigor, and track lift on the metrics your business actually cares about \u2014 not just CTR. You get cleaner experiments, faster iteration, and lift numbers your CFO will actually trust.<\/p>\n\n\n\n<p><strong>Want to see what real lift looks like for your product?<\/strong> <strong>[Talk to the NVECTA team \u2192]<\/strong> Get a custom lift analysis on your current recommender setup, free.<\/p>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1777882077942\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><span class=\"ez-toc-section\" id=\"What_is_a_good_lift_for_a_recommendation_engine\"><\/span><strong>What is a good lift for a recommendation engine?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>It depends on your baseline and industry, but a 2\u20135% lift in revenue per visitor over a strong baseline is a real win for mature ecommerce sites. Early-stage products replacing no recommendations at all often see 10\u201330% lift. 
If your reported lift sounds too good, double-check your test setup before celebrating.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777882165447\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><span class=\"ez-toc-section\" id=\"How_long_should_an_AB_test_for_a_recommender_run\"><\/span><strong>How long should an A\/B test for a recommender run?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Minimum two weeks for most consumer products, longer if your business has weekly or monthly cycles. You need to clear novelty effects (the first few days when anything new looks good) and capture enough conversions to reach statistical significance. Don&#8217;t stop early just because you see a positive trend.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777882186715\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><span class=\"ez-toc-section\" id=\"Whats_the_difference_between_offline_and_online_evaluation\"><\/span><strong>What&#8217;s the difference between offline and online evaluation?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Offline evaluation uses historical data to score candidate models on metrics like NDCG and Recall@K before launch. Online evaluation runs real users through real recommendations in an A\/B test. Offline is fast and cheap; online is the truth. Use offline to pick which models to test, online to decide which to ship.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777882207175\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><span class=\"ez-toc-section\" id=\"Can_I_measure_lift_without_an_AB_test\"><\/span><strong>Can I measure lift without an A\/B test?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Sort of, but not really. 
Pre\/post comparisons, synthetic control groups, and causal inference methods can approximate lift when A\/B testing isn&#8217;t possible, but they all rely on assumptions that often break. If you can run a proper A\/B test, do it. Tools like NVECTA make this straightforward even for small teams.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777882244512\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><span class=\"ez-toc-section\" id=\"Why_does_CTR_go_up_but_revenue_stays_flat\"><\/span><strong>Why does CTR go up but revenue stays flat?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Usually because your recommender is surfacing items users would have bought anyway, or boosting clicks on cheap items while suppressing higher-value ones. CTR rewards engagement; revenue rewards conversion at value. Always pair CTR with at least one downstream metric like RPV or AOV before declaring a win.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777882268954\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><span class=\"ez-toc-section\" id=\"Whats_the_most_underrated_metric_for_recommendation_engines\"><\/span><strong>What&#8217;s the most underrated metric for recommendation engines?<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Coverage. If your recommender only ever shows the top 3% of your catalog, you&#8217;re concentrating risk and starving the long tail. Healthy recommenders pull from a much wider slice of inventory, which protects against trends fading and helps surface profitable niche items.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Recommendation engine lift is the extra business value your recommender brings compared to a baseline (no recommendations, popularity-based picks, or an older model). 
You measure it through controlled A\/B tests that compare a treatment group seeing the new recommender against a control group, then track lift on revenue per user, click-through rate, conversion, and [&hellip;]<\/p>\n","protected":false},"author":25,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","footnotes":""},"categories":[129],"tags":[],"class_list":["post-35594","post","type-post","status-publish","format-standard","hentry","category-marketing"],"_links":{"self":[{"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/posts\/35594","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/users\/25"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/comments?post=35594"}],"version-history":[{"count":2,"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/posts\/35594\/revisions"}],"predecessor-version":[{"id":35609,"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/posts\/35594\/revisions\/35609"}],"wp:attachment":[{"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/media?parent=35594"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/categories?post=35594"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nvecta.com\/blog\/wp-json\/wp\/v2\/tags?post=35594"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}