What "87 out of 100 robots would flag this" actually means.
When SkinVault scans a photo, the AI outputs a confidence score between 0% and 100%. This represents how certain the model is that the photo contains sensitive content.
Think of it this way: if we ran 100 slightly different versions of our AI model on the same photo, 87 of them would flag it as sensitive. The higher the number, the more certain the detection.
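That "100 robots" metaphor can be sketched in a few lines. This is purely illustrative (the votes below are made up, not SkinVault's real model outputs): the confidence score behaves like the fraction of slightly perturbed models that would flag the same photo.

```python
# Hypothetical sketch of the "100 robots" metaphor. The vote counts
# here are invented to match the example in the text.
ensemble_votes = [True] * 87 + [False] * 13  # 87 of 100 models say "sensitive"

# Confidence = the fraction of models that flagged the photo.
confidence = sum(ensemble_votes) / len(ensemble_votes)
print(f"Confidence: {confidence:.0%}")  # Confidence: 87%
```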
Real-world photos aren't always clear-cut. A photo might be ambiguous - swimwear at a beach, artistic photography, medical images. The confidence score tells you how sure the AI is, so you can make informed decisions.
No AI is perfect. There are two types of mistakes:
- False positive: the AI flags a safe photo as sensitive. Annoying, but harmless.
- False negative: the AI misses an actual sensitive photo. That defeats the purpose.
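A toy sketch (not SkinVault's real pipeline) makes the two error types concrete: compare what the AI flagged against what was actually sensitive, and count each kind of mismatch.

```python
# Each entry is a hypothetical photo: (ai_flagged, actually_sensitive).
photos = [
    (True,  False),  # false positive: safe photo flagged - annoying, harmless
    (False, True),   # false negative: sensitive photo missed - defeats the purpose
    (True,  True),   # correct flag
    (False, False),  # correct pass
]

false_positives = sum(1 for flagged, truth in photos if flagged and not truth)
false_negatives = sum(1 for flagged, truth in photos if not flagged and truth)
print(false_positives, false_negatives)  # 1 1
```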
There's always a tradeoff between the two: pushing one type of mistake down pushes the other up, so you can't minimize both at the same time.
SkinVault is designed to err on the side of caution. Here's why:
The cost of missing a sensitive photo is higher than the cost of reviewing a safe one. Consider the consequences: a false positive costs you a few seconds of review, while a false negative leaves sensitive content undetected. This is why our default threshold is relatively sensitive. We'd rather flag a few extra beach photos than miss something actually sensitive.
Medical tests are designed to catch potential issues even if it means some false alarms. A mammogram might flag something that turns out to be benign - that's better than missing actual cancer. Same principle here.
You can adjust how sensitive the detection is. Lower thresholds catch more (including more false positives). Higher thresholds are stricter (but might miss borderline cases).
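The effect of moving the threshold can be sketched as a simple filter. The file names, scores, and the 0.5 and 0.8 cutoffs below are invented for illustration, not SkinVault's real defaults:

```python
# Hypothetical confidence scores for three photos.
scores = {"beach.jpg": 0.55, "portrait.jpg": 0.12, "private.jpg": 0.93}

def flagged(scores, threshold):
    """Return photos whose confidence meets or exceeds the threshold."""
    return sorted(name for name, score in scores.items() if score >= threshold)

# A lower threshold catches more, including borderline cases.
print(flagged(scores, 0.5))  # ['beach.jpg', 'private.jpg']
# A higher threshold is stricter and drops the borderline beach photo.
print(flagged(scores, 0.8))  # ['private.jpg']
```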
In the app, you can review flagged photos and mark safe ones. The more you review, the better you'll understand where your personal comfort level is.
- Very high confidence: the AI is very sure this photo contains sensitive content. These are rarely false positives.
- High confidence: likely sensitive, but could be an ambiguous case like swimwear, an artistic shot, or a partial view. Worth reviewing.
- Moderate confidence: borderline cases where the AI isn't sure. These have higher false positive rates but are flagged to be safe.
- Low confidence: the AI doesn't think this is sensitive. Not flagged by default, but you can lower your threshold to include these.
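Mapping a score to one of these tiers is a straightforward chain of cutoffs. The 0.9 / 0.7 / 0.5 boundaries below are hypothetical, chosen only to illustrate the idea; the app's real cutoffs may differ:

```python
def tier(confidence):
    """Map a 0.0-1.0 confidence score to a descriptive tier (illustrative cutoffs)."""
    if confidence >= 0.9:
        return "very likely sensitive"
    if confidence >= 0.7:
        return "likely sensitive - worth reviewing"
    if confidence >= 0.5:
        return "borderline - flagged to be safe"
    return "not flagged by default"

print(tier(0.87))  # likely sensitive - worth reviewing
```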
Confidence scores give you transparency into the AI's thinking. Instead of a mysterious black box that just says "yes" or "no," you see exactly how certain the model is.
We default to catching more rather than missing things. The robots are here to help, and they'd rather be a little overcautious than miss something important.
Remember: You're always in control. The AI flags, you decide. Every flagged photo is a suggestion, not a judgment.