There have been a lot of complaints about both the competency and the logic behind the latest Epstein archive release by the DoJ: from censoring the names of co-conspirators to censoring pictures o…
I tried to leave a comment, but it doesn’t seem to be showing up there.
I’ll just leave it here:
too tired to look into this, but one suggestion - since the hangup seems to be comparing an L and a 1, maybe you need to get into per-pixel measurements. This might be necessary if the accuracy of ML or OCR models isn't at least 99.5% for a document containing thousands of ambiguous L's. Any inaccuracy from an ML or OCR model leaves you guessing among 2^N candidates, which becomes infeasible quickly. Maybe reverse engineer the font rendering by creating an exact replica of the source image? I trust some talented hacker will nail this in no time.
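To make the per-pixel idea concrete, here is a rough sketch, assuming you know (or can guess) the source font and can crop the ambiguous glyph out of the page image: render both candidates at the same size and keep whichever differs least from the crop. The font file and crop filename below are placeholders.

```python
# Rough sketch: compare a cropped glyph against rendered "l" and "1" candidates.
# Assumes the source font is known or guessable; paths and sizes are placeholders.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_glyph(ch, font, size=(40, 60)):
    """Render a single character as a grayscale bitmap."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((4, 4), ch, font=font, fill=0)
    return np.asarray(img, dtype=np.float32)

def classify_glyph(crop, font):
    """Return the candidate whose rendering is closest to the crop, per pixel."""
    target = np.asarray(crop.convert("L").resize((40, 60)), dtype=np.float32)
    scores = {}
    for ch in ("l", "1"):
        rendered = render_glyph(ch, font)
        scores[ch] = float(np.mean((rendered - target) ** 2))  # pixel-wise MSE
    return min(scores, key=scores.get), scores

font = ImageFont.truetype("SomeSuspectedFont.ttf", 40)   # hypothetical font file
crop = Image.open("page_012_glyph_07.png")               # hypothetical glyph crop
print(classify_glyph(crop, font))
```

Getting the rasterizer, hinting and anti-aliasing to match the original rendering exactly is the hard part, which is what the "exact replica" suggestion is really about.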
I also support the idea of checking for PDF errors using a stream decoder.
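As a minimal sketch of that check, assuming pikepdf (the Python bindings for qpdf) is an acceptable tool: ask qpdf for structural problems, then try to push every stream through its declared filters and print whatever fails. The filename is a placeholder.

```python
# Rough sketch: surface PDF structure and stream-decoding errors, assuming pikepdf.
import pikepdf

with pikepdf.open("epstein_release_part_01.pdf") as pdf:   # placeholder filename
    # qpdf's built-in consistency check (xref table, object syntax, etc.)
    for problem in pdf.check():
        print("structure:", problem)

    # Try to run every stream through its declared filters (FlateDecode and friends).
    # Streams with unsupported image filters (e.g. DCTDecode) will also show up here.
    for i, obj in enumerate(pdf.objects):
        if isinstance(obj, pikepdf.Stream):
            try:
                obj.read_bytes()   # applies the filter chain; raises if decoding fails
            except pikepdf.PdfError as err:
                print(f"objects[{i}]: {err}")
```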
How big is N though?
64
Since there are 78 pages, I'm guessing at least 1 ambiguity per page? Anyways, it's dreadfully big.
2^78 is large, but computers can do an awful lot per second, so if only some of the pages contain attachments you're looking at more like 2^40-2^55, and the low end of that is something you could brute-force in weeks if you can do millions of attempts a second.
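For scale, the back-of-the-envelope numbers (assuming a constant attempt rate and nothing smarter than exhaustive search):

```python
# Back-of-the-envelope brute-force times at a fixed attempt rate.
SECONDS_PER_DAY = 86_400

def days_to_search(bits, attempts_per_second=1_000_000):
    return 2**bits / attempts_per_second / SECONDS_PER_DAY

print(f"2^40 at 1M/s: {days_to_search(40):,.1f} days")                # ~12.7 days
print(f"2^55 at 1M/s: {days_to_search(55) / 365:,.0f} years")         # ~1,142 years
print(f"2^55 at 1B/s: {days_to_search(55, 10**9) / 365:,.1f} years")  # ~1.1 years
```

So "weeks" really only holds near the 2^40 end unless the search can be pruned or massively parallelized.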
I have never looked into the details of OCR, but if it's a classifier it should give its confidence in a character being a 1 or an L, so you can start with the low-confidence characters.
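That is essentially best-first search over the ambiguous positions. A small sketch, assuming you already have a per-glyph probability of "1" vs "l" from the classifier; the probabilities and the check_candidate() test below are made up.

```python
# Rough sketch: enumerate 1-vs-l candidates in order of decreasing likelihood,
# so the low-confidence characters get reconsidered first.
import heapq
import math

def candidates_by_likelihood(p_is_one):
    """Yield (chars, cost) assignments, most probable first.

    p_is_one[i] is the classifier's probability (strictly between 0 and 1) that
    ambiguous glyph i is '1'; glyphs are assumed independent for the ranking.
    """
    n = len(p_is_one)
    best = ['1' if p >= 0.5 else 'l' for p in p_is_one]
    # Cost of overriding the classifier at position i (log-odds magnitude).
    flip_cost = [abs(math.log(p / (1.0 - p))) for p in p_is_one]

    # Best-first search over sets of flipped positions; each subset is seen once
    # because we only ever extend a set with indices to the right of its last flip.
    heap = [(0.0, -1, ())]                     # (total cost, last flipped index, flips)
    while heap:
        cost, last, flips = heapq.heappop(heap)
        chars = [('l' if best[i] == '1' else '1') if i in flips else best[i]
                 for i in range(n)]
        yield ''.join(chars), cost
        for i in range(last + 1, n):
            heapq.heappush(heap, (cost + flip_cost[i], i, flips + (i,)))

def check_candidate(guess):
    """Placeholder: replace with the real validity test (e.g. does the stream decode?)."""
    return False

# Hypothetical classifier confidences for five ambiguous glyphs on one page.
probs = [0.97, 0.55, 0.10, 0.51, 0.88]
for guess, cost in candidates_by_likelihood(probs):
    if check_candidate(guess):
        print("hit:", guess)
        break
```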
Asking the real questions