Interactive Museum Guide: Fast and Robust Recognition of Museum Objects

Herbert Bay, Beat Fasel and Luc Van Gool

Computer Vision Laboratory (BIWI), ETH Zurich
Sternwartstr. 7, 8092 Zurich, Switzerland
{bay, bfasel, vangool}@vision.ee.ethz.ch

Abstract. In this paper, we describe the application of the novel SURF (Speeded Up Robust Features) algorithm [1] for the recognition of objects of art. For this purpose, we developed a prototype of a mobile interactive museum guide consisting of a tablet PC that features a touchscreen and a webcam. This guide recognises objects in museums based on images taken by the visitor. Using different image sets of real museum objects, we demonstrate that both the object recognition performance and the speed of the SURF algorithm surpass the results obtained with SIFT, its main contender.

1 Introduction

Many museums still present their exhibits in a rather passive and non-engaging way. The visitor has to search through a booklet in order to find descriptions of the objects on display. However, looking for information in this way is quite a tedious procedure. Moreover, the information found does not always meet the visitor's specific interests. One possibility of making exhibitions more attractive to the visitor is to improve the interaction between the visitor and the objects of interest by means of a guide. In this paper, we present an interactive museum guide that is able to automatically find and instantaneously retrieve information about the objects of interest using a standard tablet PC. Undoubtedly, technological developments will lead to lighter and downsized solutions in the near future. The focus of this paper is on the vision component used to recognise the objects.

1.1 Related Work

Recently, several approaches have been proposed that allow visitors to interact via an automatic museum guide. Kusunoki et al. [2] proposed a system for children that uses a sensing board, which can rapidly recognise the type and locations of multiple objects. It creates an immersive environment by giving audio-visual feedback to the children. Other approaches include robots that guide users through museums [3, 4]. However, such robots are difficult to adapt to different environments, and they are not appropriate for individual use. An interesting approach using hand-held devices, like mobile phones, was proposed by [5], but their recognition technique does not seem to be very robust to viewing angle or lighting changes.

Various object recognition methods have been investigated in the last two decades. More recently, SIFT [6] and its variants such as PCA-SIFT [7] and GLOH [8] have been successfully applied to many image matching applications. In this paper, we show that the new SURF (Speeded Up Robust Features) algorithm [1] surpasses SIFT in both speed and recognition accuracy.

1.2 Interactive Museum Guide

The proposed interactive, image-based museum guide is invariant to changes in lighting, translation, scale, rotation and viewpoint. Our object recognition system was implemented on a Tablet PC using a conventional USB webcam for image acquisition, see Figure 1. This hand-held device allows the visitor to simply take a picture of an object of interest from any position; the visitor is then provided, almost immediately, with a detailed description of the latter.

Fig. 1. Tablet PC with the USB webcam fixed on the screen. The interface of the object recognition software is operated via a touchscreen.

An early prototype of this museum guide was shown to the public during the 150th anniversary celebration of the Federal Institute of Technology (ETH) in Zurich, Switzerland [9]. The descriptions of the recognised objects of art are read to the visitors by a synthetic computer voice. This enhances the convenience of the guide, as the visitors can focus on the objects of interest instead of reading the object descriptions on the screen of the guide.

In order to demonstrate the recognition capabilities of our latest implementation, we created a database with objects on display in the Landesmuseum. A sample image of each of the 20 chosen objects is shown in Figure 2.

Fig. 2. Sample images of the 20 chosen art objects from the Landesmuseum.

The remainder of this paper is organised as follows. First, we introduce our object recognition system in detail (Section 2). Then, we present and discuss results obtained for a multi-class task (Section 3), and finally conclude with an overall discussion and some final remarks (Section 4).

2 Object Recognition System

We developed an object recognition system that is based on interest point correspondences between individual image pairs. Input images, taken by the user, are compared to all model images in the database. This is done by matching their respective interest points. The model image with the highest number of matches with respect to the input image is chosen as the one which represents the object the visitor is looking for.
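As an illustration only, the following minimal sketch shows this baseline selection rule. The feature extraction and descriptor matching are abstracted away behind a match_fn argument, which stands in for the SURF detection and matching steps described in Sections 2.1 to 2.3; none of these names appear in the original paper.

```python
def select_model(query_features, model_feature_sets, match_fn):
    """Baseline recognition: return the index of the model image that has the
    highest number of interest point matches with the query image.
    match_fn(query, model) is assumed to return a list of matched pairs."""
    match_counts = [len(match_fn(query_features, model)) for model in model_feature_sets]
    return max(range(len(match_counts)), key=match_counts.__getitem__)
```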

Fig. 3. Left to right: the (discretised and cropped) Gaussian second order partial derivatives in y-direction and xy-direction, and our approximations thereof using box filters. The grey regions are equal to zero.

Furthermore, we propose a new object identification strategy based on the mean Euclidean distance between all matching pairs. The latter proved to yield better results than the aforementioned traditional approach.

In the following sub-sections, we briefly describe the SURF algorithm. Then, we present the new object selection strategy.

2.1 Fast Interest Point Detection

The SURF feature detector is based on the Hessian matrix. Given a point x = (x, y)^T in an image I, the Hessian matrix H(x, σ) in x at scale σ is defined as follows

$$
H(\mathbf{x}, \sigma) =
\begin{pmatrix}
L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\
L_{xy}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma)
\end{pmatrix},
\qquad (1)
$$

where L_xx(x, σ) is the convolution of the Gaussian second order derivative ∂²/∂x² g(σ) with the image I in point x, and similarly for L_xy(x, σ) and L_yy(x, σ). In contrast to SIFT, which approximates the Laplacian of Gaussian (LoG) with Differences of Gaussians (DoG), SURF approximates the second order Gaussian derivatives with box filters, see Figure 3. Image convolutions with these box filters can be computed rapidly by using integral images as defined in [10]. The entry of an integral image I_Σ(x) at location x = (x, y)^T represents the sum of all pixels of the base image I within the rectangular region formed by the origin and x:

$$
I_{\Sigma}(\mathbf{x}) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i, j)
\qquad (2)
$$

Once we have computed the integral image, it is straightforward to calculate the sum of the intensities of the pixels over any upright, rectangular area.
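The following NumPy sketch illustrates this under our own indexing conventions; it is not code from the paper. The box sum uses the usual four-corner combination of integral image entries, so its cost is independent of the box size.

```python
import numpy as np

def integral_image(img):
    """I_Sigma(x, y): sum of all pixels in the rectangle spanned by the origin and (x, y)."""
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, x0, y0, x1, y1):
    """Sum of image intensities inside the upright rectangle [x0, x1] x [y0, y1]
    (inclusive bounds), computed from four integral image entries."""
    a = ii[y0 - 1, x0 - 1] if (x0 > 0 and y0 > 0) else 0.0
    b = ii[y0 - 1, x1] if y0 > 0 else 0.0
    c = ii[y1, x0 - 1] if x0 > 0 else 0.0
    d = ii[y1, x1]
    return d - b - c + a
```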

The location and scale of interest points are selected by relying on the determinant of the Hessian. Hereby, the approximations of the second order derivatives are denoted as D_xx, D_yy, and D_xy. By choosing the weights for the box filters adequately, we obtain the following approximation of the Hessian's determinant:

$$
\det(H_{\mathrm{approx}}) = D_{xx} D_{yy} - (0.9\, D_{xy})^2.
\qquad (3)
$$

Fig. 4. Left: Detected interest points for a sunflower field. This kind of scene clearly shows the nature of the features obtained from Hessian-based detectors. Middle: Haar wavelet filters used with SURF. Right: Detail of the Graffiti scene showing the size of the descriptor window at different scales.

For more details, see [1]. Interest points are localised in scale and image space by applying a non-maximum suppression in a 3×3×3 neighbourhood. Finally, the found maxima of the determinant of the approximated Hessian matrix are interpolated in scale and image space.
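As a rough sketch of these two steps, and assuming the response maps Dxx, Dyy and Dxy have already been computed with the box filters of Figure 3 (one map per scale), the Hessian determinant of Equation (3) and a simple 3×3×3 non-maximum suppression could look as follows. The threshold value and the use of SciPy are our own illustrative choices.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def hessian_response(Dxx, Dyy, Dxy):
    """Approximated Hessian determinant of Equation (3)."""
    return Dxx * Dyy - (0.9 * Dxy) ** 2

def interest_point_candidates(responses, threshold=1e-3):
    """3x3x3 non-maximum suppression over a stack of response maps (scale, y, x).
    Returns candidate (scale, y, x) triples; interpolation in scale and image
    space, as described above, would then refine these candidates."""
    stack = np.stack(responses)
    is_max = stack == maximum_filter(stack, size=3)
    return np.argwhere(is_max & (stack > threshold))
```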

2.2 Interest Point Descriptor

In a first step, SURF constructs a circular region around the detected interest points in order to assign a unique orientation to the former and thus gain invariance to image rotations. The orientation is computed using Haar wavelet responses in both x and y direction, as shown in the middle of Figure 4. The Haar wavelets can be easily computed via integral images, similar to the Gaussian second order approximated box filters. Once the Haar wavelet responses are computed, they are weighted with a Gaussian with σ = 2.5s centred at the interest points. In a next step, the dominant orientation is estimated by summing the horizontal and vertical wavelet responses within a rotating wedge covering an angle of π/3 in the wavelet response space. The resulting maximum is then chosen to describe the orientation of the interest point descriptor.
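A minimal sketch of this orientation estimate, assuming the Gaussian-weighted Haar responses dx, dy at the circular sample points are already available as arrays; the angular step size is our own choice and is not specified in the paper.

```python
import numpy as np

def dominant_orientation(dx, dy, step=0.1):
    """Slide a wedge of pi/3 over the (dx, dy) response space and return the
    orientation of the longest summed response vector."""
    phi = np.arctan2(dy, dx)                      # angle of each individual response
    best_norm, best_angle = -1.0, 0.0
    for a in np.arange(-np.pi, np.pi, step):
        # responses whose angle lies within +/- pi/6 of the wedge centre a
        inside = np.abs(np.angle(np.exp(1j * (phi - a)))) < np.pi / 6
        sx, sy = dx[inside].sum(), dy[inside].sum()
        if sx * sx + sy * sy > best_norm:
            best_norm, best_angle = sx * sx + sy * sy, np.arctan2(sy, sx)
    return best_angle
```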

In a second step, the SURF descriptors are constructed by extracting square regions around the interest points. These are oriented in the directions assigned in the previous step. Some example windows are shown on the right hand side of Figure 4. The windows are split up into 4×4 sub-regions in order to retain some spatial information. In each sub-region, Haar wavelets are extracted at regularly spaced sample points. In order to increase the robustness to geometric deformations and localisation errors, the responses of the Haar wavelets are weighted with a Gaussian centred at the interest point. Finally, the wavelet responses in horizontal direction dx and in vertical direction dy are summed up over each sub-region. Furthermore, the absolute values |dx| and |dy| are summed in order to obtain information about the polarity of the image intensity changes.

Fig. 5. The descriptor entries of a sub-region represent the nature of the underlying intensity pattern. Left: In case of a homogeneous region, all values are relatively low. Middle: In presence of frequencies in x-direction, the value of Σ|dx| is high, but all others remain low. If the intensity is gradually increasing in x-direction, both values Σdx and Σ|dx| are high.

Hence, the underlying intensity pattern of each sub-region is described by a vector

$$
\mathbf{v} = \Big( \sum d_x,\; \sum d_y,\; \sum |d_x|,\; \sum |d_y| \Big).
\qquad (4)
$$

The resulting descriptor vector for all 4×4 sub-regions is of length 64. See Figure 5 for an illustration of the SURF descriptor for three different image intensity patterns. Notice that the Haar wavelets are invariant to illumination bias; additional invariance to contrast is achieved by normalising the descriptor vector to unit length.
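For illustration, a sketch of the descriptor assembly. It assumes the Haar responses are sampled on a 20×20 grid (5×5 samples per sub-region), as in the original SURF publication [1], and that the Gaussian weighting centred at the interest point has already been applied; the function name and grid layout are our own.

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Build the 64-dimensional SURF descriptor from 20x20 grids of Haar
    responses dx, dy, split into 4x4 sub-regions of 5x5 samples each."""
    entries = []
    for i in range(4):
        for j in range(4):
            sx = dx[5 * i:5 * (i + 1), 5 * j:5 * (j + 1)]
            sy = dy[5 * i:5 * (i + 1), 5 * j:5 * (j + 1)]
            # per sub-region vector of Equation (4)
            entries += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    v = np.asarray(entries)
    return v / np.linalg.norm(v)   # unit length -> invariance to contrast changes
```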

Rotation-invariant object recognition is not always necessary. Therefore, a scale-invariant-only version of the SURF descriptor was introduced in [1] and denoted 'Upright SURF' (U-SURF). Indeed, in the scenario of a hand-held interactive museum guide, where the museum visitor holds the device in both hands, it is safe to assume that images of objects are mostly taken in an upright position. Therefore, U-SURF can be used as an alternative descriptor, with the benefit of both increased speed and discrimination power. U-SURF is faster than SURF as it does not perform the orientation related computations.

In this paper, we compare the results for SURF, referred to as SURF-64, and some alternative versions (SURF-36, SURF-128), as well as for the upright counterparts (U-SURF-64, U-SURF-36, U-SURF-128) that are not invariant to image rotation. The difference between SURF and its variants lies in the dimension of the descriptor. SURF-36 extracts the descriptor vector from Equation (4) for only 3×3 sub-regions. SURF-128 is an extended version of SURF that treats the sums of dx and |dx| separately for dy < 0 and dy ≥ 0. Similarly, the sums of dy and |dy| are split up according to the sign of dx. This doubles the number of features (128 instead of 64), resulting in a more distinctive descriptor, which is not much slower to compute, but slower to match due to its higher dimensionality (still faster to match than SIFT, though). The fast matching speed for all SURF versions is achieved by a single step added to the indexing, based on the sign of the Laplacian (trace of the Hessian matrix) of the interest point. The sign of the Laplacian distinguishes bright blobs on a dark background from the inverse situation. 'Bright' interest points are only matched against other 'bright' interest points, and similarly for the 'dark' ones. This minimal information permits almost doubling the matching speed, and it comes at no computational cost, as it has already been computed in the interest point detection step.
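A sketch of the extended per sub-region vector used by SURF-128, as described above; the function name is hypothetical and the sub-region responses sx, sy are assumed to be given as arrays.

```python
import numpy as np

def extended_subregion_vector(sx, sy):
    """SURF-128 sub-region entries: sums of dx and |dx| are kept separately for
    dy < 0 and dy >= 0, and the sums of dy and |dy| are split by the sign of dx,
    giving 8 values per sub-region instead of 4 (16 x 8 = 128 in total)."""
    y_neg, y_pos = sy < 0, sy >= 0
    x_neg, x_pos = sx < 0, sx >= 0
    return np.array([
        sx[y_neg].sum(), np.abs(sx[y_neg]).sum(),
        sx[y_pos].sum(), np.abs(sx[y_pos]).sum(),
        sy[x_neg].sum(), np.abs(sy[x_neg]).sum(),
        sy[x_pos].sum(), np.abs(sy[x_pos]).sum(),
    ])
```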

2.3 Object Recognition

Traditional object recognition methods rely on model images, each representing a single object in isolation. In practice, however, the necessary segmentation is not always affordable or even possible. For our object recognition application, we use model images where the objects are not separated from the background. Thus, the background also provides features for the matching task. We assume that any given test image contains only one object, or one group of objects that belong together. Hence, object recognition is achieved by image matching. Extracted interest points of the input image are compared to the interest points of all model images. In order to create a set of interest point correspondences M, we used the nearest neighbour ratio matching strategy [11, 6, 12]: a matching pair is accepted if its Euclidean distance in descriptor space is smaller than 0.8 times the distance to the second nearest neighbour.
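A compact sketch of this matching step, including the sign-of-the-Laplacian pre-filter from Section 2.2. Descriptors are assumed to be given as (vector, laplacian_sign) pairs; the brute-force search is for clarity only and is not how the timings in Section 3 were obtained.

```python
import numpy as np

def match_descriptors(query, model, ratio=0.8):
    """Nearest neighbour ratio matching: keep a pair if the distance to the nearest
    model descriptor is smaller than `ratio` times the distance to the second
    nearest one. Only descriptors with the same Laplacian sign are compared."""
    matches = []
    for qi, (q_vec, q_sign) in enumerate(query):
        candidates = [(np.linalg.norm(q_vec - m_vec), mi)
                      for mi, (m_vec, m_sign) in enumerate(model) if m_sign == q_sign]
        if len(candidates) < 2:
            continue
        candidates.sort()
        (d1, mi1), (d2, _) = candidates[0], candidates[1]
        if d1 < ratio * d2:
            matches.append((qi, mi1, d1))   # keep d1 for the score of Equation (5)
    return matches
```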

The selected object is the one figuring in the model image with the highest recognition score S_R. Traditionally, this score is the total number of matches in M. However, the presence of mismatches often leads to false detections. This can be avoided with the help of the following new alternative for the estimation of the recognition score. Hereby, we calculate the mean Euclidean distance to the individual nearest neighbours for each image pair. This value is typically smaller for corresponding image pairs than for non-corresponding ones, and it does not depend on the number of extracted features in the individual images. Hence, we maximise the following recognition score

$$
S_R = \arg\max_i \frac{N_i}{\sum_{j=1}^{N_i} d_{ij}^2}
\qquad (5)
$$

and choose the object for which the mean distance of its matches is smallest. N_i denotes the number of matches in image i, and d_ij is the Euclidean distance in descriptor space between a matching pair of keypoints. The matching criterion is that this distance is smaller than 0.8 times the distance to the second nearest neighbour.
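A sketch of this score, assuming the per-model match distances d_ij have already been collected (for instance by the matcher sketched above); the handling of model images without matches is our own choice.

```python
import numpy as np

def recognition_score(match_distances):
    """Score of Equation (5) for one model image: the number of matches divided by
    the sum of their squared descriptor distances (inverse mean squared distance).
    The model image maximising this score is selected."""
    d = np.asarray(match_distances, dtype=np.float64)
    return d.size / np.sum(d ** 2) if d.size else 0.0

# Example selection over all model images, given a list `all_distances`
# holding the match distances of each model image:
#   best = max(range(len(all_distances)), key=lambda i: recognition_score(all_distances[i]))
```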

3 Experimental Results

For each of the 20 objects of art in our database, images of size 320×240 were taken from different viewing angles. This allows for some degree of viewpoint independence. The database includes a total of 205 model images. These are grouped into two model sets (M1 and M2) with 105 and 100 images, respectively. The reasons for the choice of two different model sets are the use of two different cameras and the presence of different lighting conditions. Moreover, fewer model images for a given object represent a more challenging situation for object recognition.

For similar reasons, we built 3 different test sets (T1-T3) with a total of 116 images (42, 34 and 40, respectively). Each set contains one or more images of all objects. These objects of art are made of different materials, have different shapes and encompass wooden statues, paintings, metal and stone items, as well as objects enclosed in glass cabinets which produce interfering reflections. The images were taken from substantially different viewpoints under arbitrary scale, rotation and varying lighting conditions.

The test image sets were evaluated on each of the model sets. The obtained recognition results are shown in Tables 1 and 2. Listed are the results for the standard recognition score based on the maximum number of matches (Table 1) and for the mean Euclidean distance (Table 2), as described in Equation (5).

Method       Time D(s)+M(s)   T1/M1  T2/M1  T3/M1  T1/M2  T2/M2  T3/M2   Total (%)
SURF-36      19+26            81     79     85     71     94     78      81.0
SURF-64      19+38            88     79     90     69     100    78      83.6
SURF-128     19+59            81     91     90     71     97     75      83.5
U-SURF-36    16+26            74     79     90     74     91     75      80.2
U-SURF-64    16+38            86     85     88     74     94     78      83.8
U-SURF-128   16+59            83     94     95     76     94     80      86.5
SIFT         136+83           79     88     90     76     91     75      82.7

Table 1. Image matching results (recognition rates in %) for different SURF versions and SIFT. Listed are both the total detection time D(s) and the matching time M(s) for all 3 test sets combined with the model sets.

It can be seen that most versions of SURF outperform SIFT for most test sets, while being substantially faster for both computation and matching. The recognition rates for the new recognition score, based on the mean Euclidean distance, increase by up to 10%. Note that both the SIFT and SURF descriptors were applied to the same interest points for all experiments. The reported computation times were achieved on a Linux Tablet PC equipped with an Intel Pentium M processor running at 1.7 GHz.

Method       Time D(s)+M(s)   T1/M1  T2/M1  T3/M1  T1/M2  T2/M2  T3/M2   Total (%)
SURF-36      19+26            86     88     90     76     97     73      84.5
SURF-64      19+38            83     91     88     83     97     83      87.1
SURF-128     19+59            88     85     93     79     100    85      88.0
U-SURF-36    16+26            86     100    98     81     100    85      91.1
U-SURF-64    16+38            86     94     93     81     100    85      89.4
U-SURF-128   16+59            86     94     95     86     100    90      91.5
SIFT         136+83           83     91     100    76     94     80      86.9

Table 2. Image matching results (recognition rates in %) for different SURF versions and SIFT with the new matching strategy. Listed are both the total detection time D(s) and the matching time M(s) for all test sets combined with the model sets.

Figures 6 and 7 show cases where SURF and SIFT fail to recognise the same foreground objects. On the bottom of Figure 6, two image pairs are displayed where the foreground object is not correctly recognised by the SURF algorithm. Note, however, that a correct match was found for valid objects that are visible in the background. In contrast, SIFT did not find enough matches to allow for a correct recognition of model objects situated either in the foreground or the background of the depicted test images.

Figures 8 and 9 show cases where either SIFT or SURF fails to recognise the correct foreground object. Note that the goblet shown in the top row of Figure 8 was not correctly recognised by SIFT in two cases. Not a single match was found on the object itself, but many on the enclosing showcase. However, many model objects contained in the database are enclosed in showcases and can thus lead to false matches when it comes to the recognition of the foreground object of interest. Figure 9 (left) shows a case where only SURF produces a false recognition. Notice that many false matches were found between the object of interest and a background object that is not part of the model database. Hence, test objects can be falsely recognised due to model images that contain similar, arbitrary background objects that are not part of the objects of interest.

Finally, Figure 9 (right) shows a successfully recognised object. In that specific case, the background information was helpful for the recognition of the object.

4 Discussion and Conclusion

In this paper, we described the functionality of an interactive museum guide, which robustly recognises museum exhibits under difficult environmental conditions. Our guide is robust to scale (SURF, U-SURF) and rotation (SURF). Changes of the viewing angle are covered, to some extent, by the overall robustness of the descriptor. The museum guide runs on standard low-cost hardware.

4.1 Object Recognition

With the computational efficiency of SURF, object recognition can be performed instantaneously for the 20 objects on which we tested the different schemes. The images were taken with a low-quality webcam. However, this affected the results only to a limited extent. Note that, in contrast to the approach described in [5], none of the tested schemes uses colour information for the object recognition task. This is one of the reasons for the above-mentioned recognition robustness under various lighting conditions. We experimentally verified that illumination variations, caused by artificial and natural lighting, lead to poor recognition results when colour is used as the only source of information.

The fact that our model images include background information can be helpful for the recognition of objects. Especially in cases where the objects of interest are too similar or do not provide enough robust and discriminant features, background information may allow the object to be recognised successfully. However, if a dominating background object is present in the test image, our recognition methods find more matches on the object in the background than on the one in the foreground, and this leads to a false recognition, see Figure 6.

4.2 Automatic Room Detection

With a larger number of objects to be recognised, the matching accuracy and speed decrease. Also, additional background clutter can enter the database, which may generate mismatches and thus lead to false detections. However, in a typical museum, the proposed interactive museum guide has to be able to cope with tens of thousands of objects with possibly similar appearance. A solution to this problem would be to determine the visitor's location by adding a Bluetooth receiver to the interactive museum guide that picks up signals emitted from senders placed in the different exhibition rooms of the museum [9]. This information can then be used to reduce the search space for the extraction of relevant objects. Hence, the recognition accuracy is increased and the search time reduced. Moreover, this information can be used to indicate the user's current location in the museum.

5 Acknowledgements

The authors gladly acknowledge the financial support provided by the Toyota corporation. We also gratefully acknowledge the support by the Swiss National Museum in Zurich, Switzerland.

References

1. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded Up Robust Features. In: ECCV. (2006)

2. Kusunoki, F., Sugimoto, M., Hashizume, H.: Toward an interactive museum guide with sensing and wireless network technologies. In: WMTE 2002, Vaxjo, Sweden. (2002) 99–102

3. Burgard, W., Cremers, A., Fox, D., Hähnel, D., Lakemeyer, G., Schulz, D., Steiner, W., Thrun, S.: The interactive museum tour-guide robot. In: Fifteenth National Conference on Artificial Intelligence (AAAI-98). (1998)

4. Thrun, S., Beetz, M., Bennewitz, M., Burgard, W., Cremers, A., Dellaert, F., Fox, D., Hähnel, D., Rosenberg, C., Roy, N., Schulte, J., Schulz, D.: Probabilistic algorithms and the interactive museum tour-guide robot Minerva. International Journal of Robotics Research 19(11) (2000) 972–999

5. Föckler, P., Zeidler, T., Bimber, O.: PhoneGuide: Museum guidance supported by on-device object recognition on mobile phones. Research Report 54.7454.72, Bauhaus-University Weimar, Media Faculty, Dept. Augmented Reality (2005)

6. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2) (2004) 91–110

7. Ke, Y., Sukthankar, R.: PCA-SIFT: A more distinctive representation for local image descriptors. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. (2004) 506–513

8. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. PAMI 27(10) (2005) 1615–1630

9. Bay, H., Fasel, B., Van Gool, L.: Interactive museum guide. In: The Seventh International Conference on Ubiquitous Computing (UBICOMP), Workshop on Smart Environments and Their Applications to Cultural Heritage. (2005)

10. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Computer Vision and Pattern Recognition. (2001)

11. Baumberg, A.: Reliable feature matching across widely separated views. In: Computer Vision and Pattern Recognition. (2000) 774–781

12. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. In: Computer Vision and Pattern Recognition. Volume 2. (2003) 257–263

Fig. 6. Common image matching mistakes. Both SIFT (top row) and SURF (bottom row) fail to recognise the same test objects. In each of the four two-image combinations, test images are shown on the top and matched model images on the bottom.

Fig. 7. Common image matching mistakes. Both SIFT (top row) and SURF (bottom row) fail to recognise the same test object. In each of the four two-image combinations, test images are shown on the top and matched model images on the bottom.

Fig. 8. Individual image matching mistakes produced by SIFT. In each of the four image combinations, test images are shown in the top row and the matched model image in the bottom row.

Fig. 9. Individual image matching mistake produced by SURF (left) and a successfully recognised object (right). The test image is shown on the top and the matched model image on the bottom.
