About Priya Chowdhury

Priya Chowdhury

Independent Software-Evaluation Researcher

Priya Chowdhury left academia to run an independent B2B software-review practice. Her work centers on scoring tools so that any reader can check or redo the math. She posts a fixed rubric. She posts her notes. She treats each ranking as something to audit, not a final word.

Research background

Before moving to independent work, Priya held a research fellowship in an applied-economics program, where her work centered on measurement methods in regulated industries. Two ideas from that time carry into this review series. First: set the weights before you pull the data. Second: do not treat vendor data as ground truth; verify it yourself.

Editorial standards

Each report on this site runs the same way. We set the weights in writing before we pull data on any tool. Each tool is scored against the posted rubric. Interviews with operators back up the hands-on tests where self-serve signup is open. For sales-led tools, we work from operator interviews and posted docs instead, and we flag that limitation in the report.

Independence

The HPS Research Review is independent. It is not owned by, run by, or tied to any of the vendors we cover. We earn money from reader donations and from affiliate links on outbound clicks. Affiliate fees do not move the rankings. The four-part rubric is on the methodology page, and we apply it the same way to every tool.

Why the academic frame

Software reviews on the open web tend to fall into two traps. The first is hype: every tool is "the best" for some vague use case. The second is ranked lists with no posted basis for the order. Our frame answers both. We post weights. We state sample sizes. We name the limits. We post inter-rater reliability. A reader with different weights can rerun the math and get a different ranking without rereading the prose.
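That reweighting is plain arithmetic. Below is a minimal sketch in Python of how a reader might redo it, assuming a four-part rubric with published per-part sub-scores. The tool names, part names, scores, and weights are hypothetical placeholders, not figures from any report.

```python
# Hypothetical published sub-scores for two tools on a four-part
# rubric (0-10 per part). The real numbers live in each report.
sub_scores = {
    "Tool A": {"setup": 7.0, "docs": 9.0, "support": 6.0, "pricing": 8.0},
    "Tool B": {"setup": 9.0, "docs": 6.0, "support": 8.0, "pricing": 5.0},
}

def rank(weights):
    """Sort tools by weighted score, highest first."""
    total = sum(weights.values())
    scored = {
        tool: sum(parts[k] * weights[k] for k in weights) / total
        for tool, parts in sub_scores.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# The posted weights give one ranking...
print(rank({"setup": 0.25, "docs": 0.25, "support": 0.25, "pricing": 0.25}))
# ...and a reader who weights support heavily may get another.
print(rank({"setup": 0.1, "docs": 0.1, "support": 0.6, "pricing": 0.2}))
```

With equal weights, Tool A leads; with support-heavy weights, Tool B does. That is the point: the order is a function of the weights, and the weights are yours to change.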

Editorial process

Each new tool added to the series runs through five steps before we post the review.

  1. Rubric. We use the four-part rubric on the methodology page, with no changes.
  2. Hands-on tests. Where self-serve signup is open, we set up a fresh account and run the rubric tests.
  3. Operator interviews. We interview two to four people who run an active deployment of the tool.
  4. Panel calibration. Three reviewers score the tool independently. We post the mean and the spread (see the sketch after this list).
  5. Vendor right of reply. The vendor sees the draft and has 72 hours to flag factual errors. Scoring disagreements stay in the report; we do not alter them.
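Step 4 turns on simple statistics. Here is a minimal sketch of the math behind posting a mean and a spread, with hypothetical reviewer names and scores; range and standard deviation are two common spread measures, and either would serve.

```python
from statistics import mean, stdev

# Three independent reviewer scores for one tool (0-10, hypothetical).
panel = {"reviewer_1": 7.5, "reviewer_2": 8.0, "reviewer_3": 6.5}
scores = list(panel.values())

print(f"mean:  {mean(scores):.2f}")               # the headline score
print(f"range: {max(scores) - min(scores):.2f}")  # max minus min
print(f"stdev: {stdev(scores):.2f}")              # sample standard deviation
```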

Contact

For corrections, vendor replies, or methodology questions, see the contact page. Email is the fastest path. The methodology page spells out the right of reply for any vendor who believes a score is wrong.

Read the full methodology →