12 Pages
English

Discriminative Actions for Recognising Events


Description

Level: Higher education, Doctorate, Bac+8
Karteek Alahari* and C. V. Jawahar
Centre for Visual Information Technology, International Institute of Information Technology, Hyderabad 500032, INDIA. jawahar@iiit.ac.in

Abstract. This paper presents an approach to identify the importance of different parts of a video sequence from the recognition point of view. It builds on the observations that: (1) events consist of more fundamental (or atomic) units, and (2) a discriminant-based approach is more appropriate for the recognition task when compared to standard modelling techniques, such as PCA, HMM, etc. We introduce discriminative actions, which describe the usefulness of the fundamental units in distinguishing between events. We first extract actions to capture the fine characteristics of individual parts of the events. These actions are modelled, and their usefulness in discriminating between events is estimated as a score. The score highlights the important parts (or actions) of the event from the recognition aspect. The applicability of the approach to different classes of events is demonstrated along with a statistical analysis.

1 Introduction

An event may be considered as a long-term temporally varying object, which typically spans tens or hundreds of frames [1]. The problem of recognising events has received considerable research attention over the past few years [2–7].
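The abstract describes estimating each action's "usefulness in discriminating between events" as a score. The paper's exact formulation is not given in this excerpt; as a minimal illustrative sketch, one common discriminability measure is a Fisher-style ratio of between-class to within-class variance, computed per feature dimension. All names and parameters below are assumptions for illustration, not the authors' method.

```python
import numpy as np

def discriminability_score(features, labels):
    """Fisher-style discriminability: ratio of between-class to
    within-class variance per feature dimension.

    NOTE: a generic stand-in for the paper's score, chosen only to
    illustrate the idea; the authors' actual formulation may differ.

    features: (n_samples, d) array of per-action descriptors.
    labels:   (n_samples,) array of event labels.
    Returns an array of d scores; higher = more discriminative.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    overall_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in np.unique(labels):
        grp = features[labels == c]
        grp_mean = grp.mean(axis=0)
        # between-class scatter: class means around the overall mean
        between += len(grp) * (grp_mean - overall_mean) ** 2
        # within-class scatter: samples around their own class mean
        within += ((grp - grp_mean) ** 2).sum(axis=0)
    return between / (within + 1e-12)  # small epsilon avoids 0/0
```

A dimension that separates two event classes cleanly receives a much larger score than one whose values overlap across classes, which is the sense in which a score can "highlight the important parts" of an event.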



Excerpt

It has gained importance because of its immediate applicability to surveillance, gesture recognition, sign language recognition, Human Computer Interaction, etc. [4, 8, 9].

Many approaches have been proposed in the past to recognise events. Early methods typically employed 2D or 3D tracking to temporally isolate the object performing the event. Subsequent to tracking, the event is recognised by extracting higher-order image features [6, 9]. An excellent review of such classical approaches for event recognition can be found in [2]. Owing to the inherent dynamism in events, Hidden Markov Models (HMMs) [10] and Finite State Machines [5] have been popular for addressing the event recognition problem. Furthermore, models such as HMMs provide elegant ways to incorporate the variability in a large collection of event data. Another significant direction in event analysis research is to extract static image features from dynamic events [8, 11, 12]. Bobick and Davis [11] introduced Motion History and Motion Energy Images, which represent the recency and spatial density of motion respectively. In some sense, their approach reduces the dimensionality of the event recognition problem from a 3D spatiotemporal space

* Currently at Oxford Brookes University, UK.
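The Motion History and Motion Energy Images mentioned above can be sketched with simple frame differencing: each MHI pixel holds the recency of motion (recent motion is bright and decays over time), while the MEI marks wherever motion occurred at all. The thresholds and decay constant below are illustrative assumptions, not values from Bobick and Davis.

```python
import numpy as np

def motion_history_image(frames, tau=10, thresh=30):
    """Minimal Motion History Image sketch via frame differencing.

    frames: list of 2-D uint8 grayscale arrays (consecutive frames).
    tau, thresh: illustrative defaults (decay horizon, motion threshold).
    """
    h = np.zeros_like(frames[0], dtype=float)
    for prev, cur in zip(frames, frames[1:]):
        # binary motion mask: pixels whose intensity changed noticeably
        moving = np.abs(cur.astype(int) - prev.astype(int)) >= thresh
        # moving pixels reset to tau; the rest decay towards zero
        h = np.where(moving, float(tau), np.maximum(h - 1.0, 0.0))
    return h

def motion_energy_image(frames, tau=10, thresh=30):
    """Motion Energy Image sketch: where any motion occurred in the window."""
    return motion_history_image(frames, tau, thresh) > 0
```

This collapses the 3D spatiotemporal volume into a pair of 2D templates, which is the dimensionality reduction the passage refers to.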