
    Quality of text correction analysis from CDDA

    The following post is by Elaine Yeates, project manager at the Centre for Data Digitisation and Analysis in Belfast. Elaine and her team have been responsible for taking scans of a selection of volumes of the English Place Name Survey and turning them into corrected OCR’d text, for later text mining to extract the data structures and republish them as Linked Data.

    “I’ve worked up some figures based on an average character count from Cheshire, Buckinghamshire, Cambridgeshire and Derbyshire.

    We had two levels of quality control:

    1st QA (spelling and font): on completion of the OCR process, based on 40 pages averaging 4,000 characters per page, there were 346 character errors (an average of 8.65 per page), an error rate of 0.22%.

    1st QA (Unicode): on the same 40 pages, there were 235 character errors (an average of 5.87 per page), an error rate of 0.14%.

    Total 1st QA error rate: 0.36%.

    2nd QA (encompassing all of 1st QA): on the same 40 pages, there were 18 character errors (an average of 0.45 per page), an error rate of 0.01%.
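The per-page averages and percentage rates quoted above can be reproduced with a short calculation (a sketch using the page and error counts from the post; the post appears to truncate some figures to two decimals, so the last digit may differ slightly from rounding):

```python
def error_rate(errors, pages=40, chars_per_page=4000):
    """Return (average errors per page, error rate as a % of characters)."""
    per_page = errors / pages
    pct = errors / (pages * chars_per_page) * 100
    return round(per_page, 2), round(pct, 2)

print(error_rate(346))  # 1st QA, spelling and font
print(error_rate(235))  # 1st QA, Unicode
print(error_rate(18))   # 2nd QA
```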

    Through the pilot we identified quite a few Unicode characters unique to this material. CDDA developed an in-house online Unicode database for analysts: they can view and update the capture file, and raise new codes when found. I think for a more substantial project we might direct our QA process through an online audit system, where we could identify issues with the material, its OCR, the macros, and the 1st and 2nd stages of quality control.

    We are pleased with these figures, and they look encouraging for a larger-scale project.”

    Elaine also wrote in response to some feedback on markup error rates from Claire Grover on behalf of the Language Technology Group:

    “Thanks for these. Our QA team are primarily looking for spelling errors; from your list, the few issues seem to be bold, spaces and small caps.

    Of course when tagging, especially automated tagging, you’re looking for certain patterns. Moving forward, I feel this error rate is very encouraging, and it helps our QA team to know what patterns might be searchable for future capture.

    Looking at your issues so far on Part IV: 5 issues e-mailed against a total word count of 132,357, an error rate of 0.00003.”
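As a check on the figure above, the word-level rate works out as follows (a sketch; exact division gives roughly 0.0000378, so the 0.00003 quoted looks like a truncation to five decimal places rather than a rounding):

```python
# Markup error rate from the figures quoted above.
issues = 5        # issues e-mailed for Part IV
words = 132_357   # total word count of Part IV

rate = issues / words
print(f"{rate:.7f}")  # ~0.0000378; truncating to five decimal places gives 0.00003
```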

    I am happy to have these numbers, as they let us observe consistency of quality over iterations, as means are found to work with more volumes of EPNS.

    2 responses to “Quality of text correction analysis from CDDA”

    1. Can you post an example of the bits being OCR’d and where in the text the likely errors occur? It would be very interesting to see an example. My video didn’t turn out clear enough :( and it would be really fun to have the walk-through for others to see.

    2. Yep, I’d be keen to see that as well. Sounds interesting. Adrian