Notice Board:

Call for Papers
Vol. 10 Issue 9

Submission Start Date:
September 1, 2024

Acceptance Notification Start:
September 10, 2024

Submission End:
September 15, 2024

Final Manuscript Due:
September 25, 2024

Publication Date:
September 30, 2024


Volume 2 Issue 1

Author Name
Neha S Patil, Prof. Chhaya Nayak
Year Of Publication
2016
Volume and Issue
Volume 2 Issue 1
Abstract
Determining the appropriate number of clusters and assigning documents to them is crucial in document clustering. In this paper we study various clustering techniques, and our proposed system discovers the cluster structure without the total number of clusters being given as input. Document features, that is, words, are separated without human intervention into two groups, namely discriminative words and non-discriminative words, which contribute differently to document clustering. A variational inference algorithm infers the structure of the document collection and the word partition of each document at the same time. We also propose an approach for semi-supervised document clustering, which lies between automatic classification and auto-organization: the supervisor need not specify a set of classes, but only provides a set of texts grouped by the criteria to be used to generate the clusters (a minimal clustering sketch follows this entry).
PaperID
2016/IJRRETAS/2/2016/1611
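
The following is a minimal illustrative sketch, not the authors' implementation: it uses scikit-learn's variational Bayesian Gaussian mixture as a stand-in for the paper's variational inference algorithm, since it likewise infers the effective number of clusters rather than taking it as input. The toy corpus, priors, and dimensionality are all illustrative assumptions.

# Sketch: document clustering without a preset cluster count (toy corpus assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import BayesianGaussianMixture

docs = [
    "clustering groups similar documents into the same cluster",
    "document clustering discovers structure in a text collection",
    "good clusters share many discriminative words",
    "speech recognition transcribes spoken lecture audio",
    "optical character recognition reads text on presentation slides",
    "audio and slide text both describe the lecture content",
]

# TF-IDF weighting downplays non-discriminative words that occur everywhere.
features = TfidfVectorizer(stop_words="english").fit_transform(docs)
# Reduce dimensionality so a Gaussian mixture is a reasonable model of the data.
features = TruncatedSVD(n_components=2, random_state=0).fit_transform(features)

# n_components is only an upper bound: variational inference shrinks the
# weights of unneeded components toward zero, so the effective number of
# clusters is inferred from the data rather than supplied by the user.
bgm = BayesianGaussianMixture(n_components=4, weight_concentration_prior=0.1,
                              random_state=0).fit(features)
print(bgm.predict(features))  # cluster label per document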

Author Name
Rishikesh S Patil, Prof. Chhaya Nayak
Year Of Publication
2016
Volume and Issue
Volume 2 Issue 1
Abstract
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the visual and aural channels: the presentation slides and the lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we apply video content analysis to detect slides and optical character recognition to obtain their text. We extract textual metadata by applying video Optical Character Recognition (OCR) technology to key-frames and Automatic Speech Recognition (ASR) to lecture audio tracks. The OCR and ASR transcripts, together with the detected slide text line types, are used for keyword extraction, by which both video-level and segment-level keywords are extracted for content-based video browsing and search (a minimal keyword-extraction sketch follows this entry).
PaperID
2016/IJRRETAS/2/2016/1612
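
The following is a minimal sketch of the segment-level keyword-extraction step only; it assumes the OCR and ASR transcripts have already been produced and merged into one text per video segment, and it substitutes plain TF-IDF ranking for the paper's line-type-aware extraction. All transcript text here is an illustrative assumption.

# Sketch: rank keywords per lecture segment from combined OCR+ASR text.
from sklearn.feature_extraction.text import TfidfVectorizer

# One merged OCR+ASR transcript per video segment (assumed toy text).
segments = [
    "gradient descent minimizes the loss function step by step",
    "convolutional networks apply learned filters to input images",
    "speech recognition converts the lecture audio track into text",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(segments)
terms = vec.get_feature_names_out()

# Segment-level keywords: the highest-weighted terms in each segment's row;
# pooling all segments the same way would yield video-level keywords.
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print(f"segment {i}:", [term for term, score in top if score > 0])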