OpenFacet

Opting for Pure Data Over Rules of Thumb.

Jul 25, 2025

Diamond pricing, Rules of Thumb, pricing research, Integrity in pricing

In pursuit of greater clarity and transparency in diamond pricing, OpenFacet has conducted a rigorous internal study to examine whether algorithmic logic, specifically Rule of Thumb (ROT) percentage adjustments, can enhance the accuracy of its benchmark pricing model. Our goal has always been to represent the most reliable snapshot of real retail market pricing for diamonds, and to do so with full objectivity and traceability.

This blog post documents the methodology and rationale behind our exploration of Rule of Thumb calculations, and explains why, after careful analysis, we opted to rely strictly on actual data collected from retail platforms, without introducing synthetic rules or correction ranges.

The Hypothesis: Can ROT Smooth and Correct Retail Price Data?

The initial premise of the study was simple: if the retail market contains noise, errors, or gaps in availability, perhaps a well-researched set of ROTs (e.g., “VS1 to VS2 usually drops ~12%”) could be used to smooth the benchmark and fill in anomalies. The team envisioned ROT as a safeguard to auto-correct or interpolate prices when retail data was thin or inconsistent.
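
To make the mechanics concrete, a ROT adjustment is nothing more than multiplying a known price by (1 − drop). The short sketch below illustrates the idea; the VS1 price and the ~12% figure are hypothetical values taken only from the example above, not from our published benchmarks.

```python
def apply_rot(price: float, drop: float) -> float:
    """Estimate the price one grade down by applying a fixed fractional drop."""
    return price * (1.0 - drop)

# Illustrative values only: a hypothetical 1ct VS1 price and the ~12% VS1 -> VS2
# drop mentioned above as an example of a Rule of Thumb.
vs1_price = 6500.0
vs2_estimate = apply_rot(vs1_price, 0.12)
print(f"ROT-estimated VS2 price: ${vs2_estimate:,.0f}")  # $5,720
```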

Step-by-Step: How the Study Was Conducted
  1. Adjacent Pair Analysis: We began by calculating the actual price differences between adjacent clarity grades (e.g., VVS1 to VVS2) and color grades (e.g., E to F) across multiple carat sizes using verified data. For instance, we plotted percentage price differences across clarity transitions for sizes from 0.3ct up to 5.0ct, observing how those deltas changed with carat size (a simplified sketch of this calculation, and of the validation in step 3, follows this list).

    Study analysis source link

  2. Clarity and Color ROT Matrices: From the round brilliant (RB) pricing file, we generated tables of average, minimum, and maximum percentage drops between clarity and color grades. These were structured to support both fixed and range-based ROTs. We also visualized this using bar charts that illustrated the variability of percentage drops by grade and size.

    Study analysis source link

  3. Validation vs. Real Data: To test the ROTs, we compared their compounded predictions (e.g., projecting the price from D/IF down to E/VVS1) against actual prices retrieved from the Round (RB) dataset at various sizes (1ct, 2ct, 3ct, 5ct). In many cases, the compounded ROT projections aligned closely with observed data (within 2-5%). These comparisons were charted to show the deviation between ROT-projected and real-world values.

    Study analysis source link

  4. Cushion Cut Extension: We replicated the study using a separate dataset for Cushion Modified Brilliant (CMB) diamonds. Given that CMBs are consistently priced below Rounds, we compared CMB vs. RB prices and computed discount percentages per cell. A ROT range for the discount was derived and plotted across carat sizes, showing consistent discounts ranging from roughly −20% to more than −45%, most pronounced in sizes between 1ct and 3ct.

    Study analysis source link

  5. Integration Challenge: While many ROT values tracked well with observed trends, gaps in retail coverage, inconsistent pricing behavior in less liquid categories, and overcorrection risks led us to a key question: Should any formula override real-world pricing?
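
For readers who want to see the mechanics of steps 1 through 3 (and the discount arithmetic of step 4), the sketch below reproduces them on a small, made-up price grid. The grades, prices, and pandas-based layout are illustrative assumptions; this is not the OpenFacet dataset or production pipeline.

```python
import pandas as pd

# Hypothetical round-brilliant prices (USD) by clarity grade and carat size.
clarity = ["IF", "VVS1", "VVS2", "VS1", "VS2"]
grid = pd.DataFrame(
    {"1ct": [9800, 9100, 8300, 7400, 6500],
     "2ct": [39500, 36200, 32800, 28900, 25100]},
    index=clarity,
)

# Step 1: percentage drop between adjacent clarity grades, per carat size.
drops = grid.pct_change().dropna() * -100   # positive = price falls at the next grade down
print(drops.round(1))

# Step 2: collapse the per-size drops into a ROT matrix of average / min / max values.
rot_matrix = pd.DataFrame({
    "avg_drop_%": drops.mean(axis=1),
    "min_drop_%": drops.min(axis=1),
    "max_drop_%": drops.max(axis=1),
})
print(rot_matrix.round(1))

# Step 3: compound the average drops from the top grade and compare with observed prices.
compounded = (1 - rot_matrix["avg_drop_%"] / 100).cumprod()
predicted_1ct = grid.loc["IF", "1ct"] * compounded
deviation_pct = (predicted_1ct / grid["1ct"].loc[predicted_1ct.index] - 1) * 100
print(deviation_pct.round(1))

# Step 4 (CMB extension) uses the same arithmetic across two price grids:
# discount_pct = (cmb_price / rb_price - 1) * 100, computed per carat/clarity/color cell.
```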

Final Team Decision: Trust the Data, Not the Shortcut.

After analyzing thousands of data points and extensive internal debate, the OpenFacet team concluded that the best path forward is to trust the data as it is, without enforcing rules that may become outdated or misaligned with market behavior.

Although ROTs offered value as a concept, we observed that even a well-designed ROT system risks masking real supply/demand fluctuations and could undermine our commitment to transparency. OpenFacet’s mission is not to fit the market to a model, but to reveal the market exactly as it behaves.

Where data is thin or unavailable, the team has taken a firm position to widen and intensify retail data scraping rather than rely on ROT-based extrapolations. We believe expanding real data coverage is a more principled and sustainable solution.

What This Means for Users

Our benchmark prices reflect a pure aggregation of best-available public retail prices, filtered through our disclosure-driven algorithm. No hand-tuning. No embedded assumptions. No artificial smoothing.

Users should be confident that OpenFacet pricing is:

- Backed by real, observable market listings

- Continuously updated

- Free from commercial or model-based bias

Conclusion

In an industry long plagued by opacity and subjectivity, OpenFacet is choosing the harder path: relying exclusively on real-time market data, regardless of its fragmentation or imperfection. By avoiding synthetic corrections, we preserve both the integrity and the credibility of our pricing model.

As always, we remain open to evolution. If future data science advances enable risk-aware, statistically sound corrections, we will re-evaluate. For now, the truth lies in the data itself.

Tags: #Research