

Pacify-Based Video Retrieval System

Authors: Mr. Rishikesh S. Patil, Prof. Chhaya Nayak


Abstract

Video is becoming a prevalent medium for e-learning. Lecture videos carry text information in both the visual and aural channels: the presentation slides and the lecturer's speech. This paper examines the relative utility of automatically recovered text from these two sources for lecture video retrieval. To extract the visual information, we apply video content analysis to detect slides and optical character recognition to obtain their text. We extract textual metadata by applying video Optical Character Recognition (OCR) technology to key-frames and Automatic Speech Recognition (ASR) to the lecture audio tracks. The OCR and ASR transcripts, as well as the detected slide text line types, are used for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search.

Index Terms: Lecture videos, automatic video indexing, content-based video search, lecture video archives
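The abstract's pipeline ends with segment-level keyword extraction over the combined OCR and ASR transcripts. A minimal sketch of that step, using plain TF-IDF scoring in standard-library Python, is shown below; the segment texts and the scoring choices are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def tfidf_keywords(segments, top_k=3):
    """Score each term in each transcript segment by TF-IDF and
    return the top_k highest-scoring keywords per segment."""
    docs = [seg.lower().split() for seg in segments]
    n = len(docs)
    # Document frequency: number of segments containing each term.
    df = Counter(term for doc in docs for term in set(doc))
    keywords = []
    for doc in docs:
        tf = Counter(doc)
        # Classic TF-IDF: term frequency weighted by inverse segment frequency.
        scores = {t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        keywords.append(ranked[:top_k])
    return keywords

# Example: three hypothetical segments mixing slide (OCR) and speech (ASR) text.
segs = [
    "neural networks introduction neural networks basics",
    "gradient descent optimization gradient updates",
    "introduction to course logistics and grading",
]
print(tfidf_keywords(segs, top_k=2))
```

Terms that recur across many segments (such as "introduction" above) score low, so the surviving keywords discriminate between segments, which is what segment-level browsing and search require.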

Introduction

Digital video has become a popular storage and exchange medium owing to the rapid development of recording technology, improved video compression techniques, and broadband networks in recent years [1]. To reach recognition accuracy that is acceptable for retrieval under these difficult conditions, each component of our ASR system must be made as robust as possible, so that it can cope with the problems that typically emerge when technology is transferred from the lab and applied in a real-life context.

Conclusion

The first conclusion is that slide text and spoken text are not the same. Comparison of the ground-truth and automatic transcripts reveals substantial differences in the content and volume of slide and spoken text. The overlap is limited even when recognition errors are controlled for by using manual transcripts. Issuing term queries common to the SLIDE and SPOKEN ground truth retrieves different videos among the results in both the manual and the automatic text search indexes.

Secondly, both manually and automatically extracted slide text exhibit greater retrieval precision than manually and automatically transcribed spoken text. We attribute this result to two causes. First, the usage of terms in slides is the product of a deliberate authoring process, while speech is often partially improvised. Less descriptive terms are more common in speech and are therefore more commonly shared with the spoken transcripts of other videos; this imprecision limits the discriminative power of spoken text for video retrieval. The second factor is the differing recognition error profiles of ASR and OCR. Errors are more frequent in OCR, but they occur at the character level, producing non-dictionary terms in the transcripts. These errors do not degrade text-based retrieval, since they do not appear as queries. Errors in ASR occur at the word level, due to phonetic and out-of-vocabulary mismatches, and the resulting inserted terms tend to be dictionary words that appear both in other video transcripts and in search queries.

Automated annotation of OCR and ASR results using Linked Open Data resources offers the opportunity to significantly increase the amount of linked educational resources. More efficient search and recommendation methods could therefore be developed for lecture video archives.
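The asymmetry described in the conclusion can be illustrated concretely: character-level OCR errors yield non-dictionary tokens that a simple dictionary filter removes before indexing, while word-level ASR errors produce valid dictionary words that survive such a filter. The toy transcripts and dictionary below are assumed for illustration only, not drawn from the paper's data.

```python
def vocab(text):
    """Lowercased set of tokens in a transcript."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard overlap between two vocabularies."""
    return len(a & b) / len(a | b)

# Toy transcripts (hypothetical) showing the two error profiles.
slide_text  = "suppor vector machnes margin kernel"      # OCR: character-level errors
spoken_text = "so support vector machines use a margin"  # ASR: dictionary-word output

dictionary = {"support", "vector", "machines", "use", "a", "so",
              "margin", "kernel", "the"}

slide_vocab = vocab(slide_text)
# Non-dictionary OCR garbage ("suppor", "machnes") is easily filtered out,
# so it never collides with user queries.
clean_slide = {t for t in slide_vocab if t in dictionary}

spoken_vocab = vocab(spoken_text)

print(sorted(clean_slide))
print(round(jaccard(clean_slide, spoken_vocab), 2))
```

Even after cleaning, the slide and spoken vocabularies overlap only partially, mirroring the paper's observation that queries common to both sources still retrieve different videos.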

Copyright

Copyright © 2025 MrRishikesh S Patil, Prof. Chhaya Nayak. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Paper Id: IJRRETAS7

Publish Date: 2016-02-01

ISSN: 2455-4723

Publisher Name: ijrretas
