A Compact Feature Representation and Image Indexing in Content-Based Image Retrieval

G. Das and S. Ray

Clayton School of Information Technology
Monash University, Australia
email: {Gita.Das, Sid.Ray}@csse.monash.edu.au

Abstract

Proper selection of features and their representation are essential for a good Content-Based Image Retrieval (CBIR) system. In this paper, we propose a compact feature representation based on the elements of Colour Co-occurrence Matrices (CCM) in Hue, Saturation, Value (HSV) colour space. To represent each image in the database, we constructed a feature vector from all diagonal elements and the Sum-Average [1] of all non-diagonal elements of the CCM. We demonstrate that our feature representation is superior in terms of system accuracy and online computation time, which we attribute to the dimension reduction. We present experimental results with a database of 2000 images from 10 categories. With our method, we achieved up to 3% improvement in precision. With relevance feedback, we obtained a further 12% improvement, as opposed to 9% with the original higher dimension.

Keywords: compact feature vector, dimension reduction, feature re-weighting, CBIR, relevance feedback

1 Introduction

The selection of features, e.g. colour, shape, colour layout, and their proper representation, e.g. colour histogram, statistical moments, are very important for good system retrieval. The concept of the co-occurrence matrix in texture has been known for a long time [1]; however, its use in colour has been reported only recently [2], [3], [4]. A CCM represents how the spatial correlation of colour changes with distance, i.e. pixel positions. In [3], a Modified Colour Co-occurrence Matrix (MCCM) is used, where a CCM of Hue is simplified to represent the number of colour pairs between adjacent pixels (4-neighbourhood). They did not consider the adverse effect of ignoring the Saturation and Value components of colour. The diagonal elements convey the colour information of the entire image, whereas the non-diagonal elements represent the shape information in an indirect way. In [4], Ojala et al. demonstrated the effect of quantization of the H, S, V levels on retrieval accuracy.

In this paper, we studied the effect of H-only and of H, S, V together on system performance. Although the addition of S and V increases the feature dimension, it provides extra information about the images and thus improves retrieval. The use of all matrix elements in the feature vector, as in [2], [3] and [4], contributes significantly to online computation, and this becomes worse in real situations where the database size is very large. Also, an increase in feature dimension essentially means an exponential growth in the number of training samples required. This limitation, called the Curse of Dimensionality, is a well known fact in Pattern Classification [5], [6]. Hence, dimension reduction has become a critical issue in feature representation and image indexing in CBIR systems [7]. If there were no information loss due to dimension reduction, system performance would be the same in both spaces. However, that is an ideal situation. We observed that the diagonal elements of the CCMs carry most of the counts (about 80%) compared to the non-diagonal elements (about 20%). This observation is in line with that reported in [3]. Also, we have noticed that most of the non-diagonal elements are zero. Thus, representing all the non-diagonal elements with a single Sum-Average [1] value (for details, see Section 2) has the following benefits:

1. the Sum-Average of the non-diagonal elements is less sensitive to noise and thus enhances retrieval performance,
2. the dimension is reduced significantly, thus reducing online computation and retrieval time,
3. compared to other methods of dimension reduction, e.g. Principal Component Analysis (PCA) [7], computing the Sum-Average is very simple.

We used all the diagonal elements in the feature vector as they carry the majority of the information, and manipulating them in any way would significantly reduce the information content of the images. Thus, we constructed a feature vector with all diagonal elements and the Sum-Average of the non-diagonal elements from the H, S, V matrices. For the rest of the paper, we refer to the original feature space as the original dimension and to ours as the reduced dimension. With our method, we achieved improved performance and faster retrieval.

In CBIR systems, relevance feedback is found to be very useful in reducing the semantic gap, which is a bottleneck in achieving a successful CBIR system [8], [9]. For a given query, the system first retrieves a set of ranked images according to a similarity metric, which represents the distance between the feature vectors of the query image and the database images. The user is then asked to select the images that are relevant or irrelevant to his/her query. The system uses this information to improve the retrieval results. We used a feature re-weighting approach to implement relevance feedback, where the weight of each component in the similarity measure is updated according to information extracted from the previous iteration. The small number of training samples can cause problems, especially in high dimension. We have shown that precision is better with the lower dimension.

Sections 2 and 3 describe the proposed approach and the experimental results respectively. Section 4 contains the conclusion and future directions.

2 Methodology

2.1 Feature Representation

We used the following nomenclature:

N: number of images in the database
Nr: scope, i.e. the number of retrieved images
Q, I: query image and database image respectively
M: number of components in the feature vector
L: number of quantization levels in H, S, V
k: iteration number in relevance feedback
w_i^k: weight factor for the ith feature component in the kth iteration

Let P be the L × L co-occurrence matrix whose element p_{xy} indicates the number of times a pixel with colour level x occurs, at a distance d, relative to pixels with colour level y. To incorporate the Sum-Average described in [1] into our case, we introduced the following formula:

Sum_{ndiag} = \sum_{x=1}^{L-1} \sum_{y=x+1}^{L} (x + y) p_{xy}    (1)

where Sum_{ndiag} is the Sum-Average of the non-diagonal elements of P. We chose the HSV colour model as it is known to be perceptually uniform [8]. We tried to make the spatial correlation more sensitive to Hue and less sensitive to Value and Saturation. We experimented with different levels of quantization and found H, S, V = 16, 3, 3 to be a good choice. As we increased the level of quantization of H, S, V, we obtained better precision, but beyond a certain range the change in precision was insignificant. So we decided to use 16, 3, 3 to keep computation low while still maintaining good precision. We chose co-occurrence distance d = 3 and used pixel pairs in both vertical and horizontal directions. We thus obtained symmetric matrices and needed to consider only the upper-diagonal elements. For a 16 × 16 matrix, the number of diagonal elements is 16 and the number of non-diagonal elements is 120. For a 3 × 3 matrix, this number is 3 for both diagonal and non-diagonal elements. In our method, we represented all non-diagonal elements by a single value. Thus, for H, S, V = 16, 3, 3 the feature dimension is 148, and it is reduced to 25 in our approach.
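As an illustration, the sketch below (in Python with NumPy, our choice for all examples in this paper) builds the per-channel co-occurrence matrix at distance d = 3 and assembles the 25-D compact descriptor from the diagonal elements plus one Sum-Average per channel. The quantisation step and the assumption that the HSV channels are scaled to [0, 1] are ours; the paper does not prescribe a particular implementation.

```python
import numpy as np

def cooccurrence_matrix(channel, levels, d=3):
    """Symmetric co-occurrence matrix of one quantised channel at distance d,
    counting pixel pairs in both horizontal and vertical directions."""
    q = np.clip((channel * levels).astype(int), 0, levels - 1)  # quantise [0, 1] -> {0, ..., levels-1}
    P = np.zeros((levels, levels), dtype=np.int64)
    for a, b in ((q[:, :-d], q[:, d:]),          # horizontal pairs at distance d
                 (q[:-d, :], q[d:, :])):         # vertical pairs at distance d
        np.add.at(P, (a.ravel(), b.ravel()), 1)
        np.add.at(P, (b.ravel(), a.ravel()), 1)  # count both orderings -> symmetric matrix
    return P

def compact_features(hsv_image, levels=(16, 3, 3), d=3):
    """25-D descriptor: all diagonal elements plus one Sum-Average of the
    non-diagonal elements per channel (16 + 3 + 3 diagonals + 3 Sum-Averages)."""
    features = []
    for c, L in enumerate(levels):
        P = cooccurrence_matrix(hsv_image[:, :, c], L, d)
        features.extend(np.diag(P))                     # all diagonal elements
        x, y = np.triu_indices(L, k=1)                  # upper non-diagonal indices (0-based)
        features.append(np.sum((x + y + 2) * P[x, y]))  # eqn (1); +2 converts to 1-based levels
    return np.asarray(features, dtype=float)
```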

As different feature components have different ranges of values, we normalized them so that they lie within [0, 1] and each component contributes equally to the similarity measure. The ith normalized feature component, f'_i, is given by

f'_i = (f_{i,org} - \mu_i) / (3 \sigma_i),    i = 1, 2, ..., M    (2)

where f_{i,org} is the original ith feature component, \mu_i is the mean and \sigma_i is the standard deviation of f_{i,org}. These values are calculated over the entire database of N samples. Under the assumption of a Gaussian distribution of values, the term 3\sigma_i ensures that at least 99% of the samples are within the range [-1, 1]. Any value that is < -1 is set to -1 and any value > 1 is set to 1. In order to map the normalized values from [-1, 1] to [0, 1], we used the following formula:

f_i = (f'_i + 1) / 2    (3)
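A minimal sketch of this normalisation, assuming the raw feature vectors of all N database images are stacked row-wise in a NumPy array:

```python
import numpy as np

def normalise_features(F):
    """3-sigma normalisation of eqns (2) and (3).

    F: array of shape (N, M) with the raw feature vectors of all N images.
    Returns the normalised features in [0, 1] together with (mu, sigma),
    which are needed to normalise a query image consistently at query time.
    """
    mu = F.mean(axis=0)
    sigma = F.std(axis=0)
    sigma = np.where(sigma > 0, sigma, 1.0)    # guard against constant components (our assumption)
    f_prime = (F - mu) / (3.0 * sigma)         # eqn (2)
    f_prime = np.clip(f_prime, -1.0, 1.0)      # saturate values outside [-1, 1]
    return (f_prime + 1.0) / 2.0, mu, sigma    # eqn (3): map [-1, 1] to [0, 1]
```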

2.2 Image Indexing

To find the similarity between I and Q, we used a weighted Minkowski distance measure, a commonly used metric in CBIR,

D(I, Q) = \sum_{i=1}^{M} w_i |f_i^I - f_i^Q|    (4)

where w_i is the weight of the ith feature component.
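In code, eqn (4) and the retrieval of the Nr nearest database images reduce to a few lines; the sketch below uses uniform weights by default, matching the no-feedback case, and its function names are our own:

```python
import numpy as np

def weighted_l1(f_query, f_image, w=None):
    """Weighted city-block distance of eqn (4) between normalised feature vectors."""
    if w is None:
        w = np.ones_like(f_query)              # equal weights when no feedback is available
    return float(np.sum(w * np.abs(f_image - f_query)))

def retrieve(f_query, F_db, scope, w=None):
    """Indices of the `scope` database images closest to the query."""
    distances = np.array([weighted_l1(f_query, f, w) for f in F_db])
    return np.argsort(distances)[:scope]
```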

When there is no relevance feedback, equal weight values are used for each feature component. When relevance feedback is used, the weights can be updated (this is the topic of another paper by the authors, to be published elsewhere) according to the following formula:

w_i^{k+1} = \sigma^k_{N_r,i} / \sigma^k_{rel,i}    (5)

Here, \sigma^k_{N_r,i} is the standard deviation of the ith feature component over the Nr retrieved images, and \sigma^k_{rel,i} is that over the relevant retrieved images in the kth iteration. Please note that similar weight factors were used in [10] and [11]; however, there the numerator represented the standard deviation over the entire database. A good feature component should have a large variation over the entire database and a small variation over the relevant images. As the feature values are normalized, the standard deviation over the entire database remains unchanged with iteration and thus does not provide any extra information. However, a feature component that has a small variation over the relevant images and a large variation over the retrieved images is considered good and should be given more weight. Also, with each iteration a new set of images is likely to be retrieved and a new \sigma^k_{N_r,i} obtained. This is why we opted to use eqn (5) in the weight calculation. In a few cases, only one relevant image was retrieved, meaning \sigma^k_{rel,i} equals zero. To avoid this situation of an infinite weight value, we used the modified formula below:

w_i^{k+1} = (1 + \sigma^k_{N_r,i}) / (1 + \sigma^k_{rel,i})    (6)
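A sketch of the re-weighting rule of eqn (6), assuming the normalised feature vectors of the retrieved images and of the user-marked relevant subset are available as arrays:

```python
import numpy as np

def update_weights(F_retrieved, F_relevant):
    """Feature re-weighting of eqn (6).

    F_retrieved: (Nr, M) normalised features of the images retrieved in iteration k.
    F_relevant:  (R, M) features of the subset the user marked as relevant.
    Returns the weight vector w^(k+1) to be used in the distance of eqn (4).
    """
    sigma_retrieved = F_retrieved.std(axis=0)   # variation over all retrieved images
    sigma_relevant = F_relevant.std(axis=0)     # zero when only one relevant image
    return (1.0 + sigma_retrieved) / (1.0 + sigma_relevant)
```

Each feedback iteration then re-ranks the database with the new weights (e.g. via the `retrieve` helper sketched above) and recomputes them from the freshly retrieved and relevant sets.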

3 Experimental Results and Analysis

The dataset comprised 2000 images from 10 different categories, namely Flowers, Fruits and Vegetables, Nature, Leaves, Ships, Faces, Fishes, Cars, Animals and Aeroplanes. Each category contains 200 images. All images have the same pixel size of 256 × 256. We used precision as the measure of system performance, given by the following formula:

precision = (No. of relevant retrieved images) / (No. of retrieved images)    (7)

We used all 2000 images as query images and calculated precision by averaging over all query images. This ensures a true representation of system performance [12], [13]. Results are reported for different scope values, that is, the number of retrieved images. First, we investigated the effect of the H-only space and of H, S, V together. As expected, we obtained a marked improvement in precision for H, S, V together: at scope = 200, it is 2.409% in the original dimension and 5.849% in the reduced dimension (see Figure 1).
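For completeness, this evaluation protocol (eqn (7) averaged over every image used as a query, at a given scope) can be sketched as follows, reusing the `retrieve` helper above; the integer label array encoding the 10 categories is our own assumption about how the ground truth is stored:

```python
import numpy as np

def mean_precision(F_db, labels, scope):
    """Average precision at a given scope with every database image as query.

    labels: length-N array with the category of each image; a retrieved image
    counts as relevant when its category matches that of the query.
    """
    precisions = []
    for q in range(len(F_db)):
        top = retrieve(F_db[q], F_db, scope)            # ranked indices via eqn (4)
        relevant = np.sum(labels[top] == labels[q])     # relevant retrieved images
        precisions.append(relevant / scope)             # eqn (7)
    return 100.0 * float(np.mean(precisions))           # precision in %
```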

Figure 1: Effect of H-only and H,S,V together on precision at different feature dimensions. Top panel: precision (%) vs. scope in the reduced dimension (H-only: 17-D, H,S,V: 25-D); bottom panel: precision (%) vs. scope in the original dimension (H-only: 136-D, H,S,V: 148-D).

Figure 2: Performance comparison in the original dimension (148-D) and the reduced dimension (25-D): precision (%) vs. scope, without relevance feedback.

Then, we experimented with the 25-D ("D" stands for dimension) feature vector and the 148-D feature vector for scope values up to 300 (see Figure 2). As the number of samples per category is 200, we should use a maximum scope of 200. However, this holds only for an ideal system in which all the relevant images are retrieved as the top 200 images. We wanted to look at the behaviour of both methods at scopes beyond 200, where relevant images may lie anywhere in the ranking.

Table 1: Improvement of precision (%) from 0 rf to 5 rf (relevance feedback iterations)

         Scope 20   Scope 200
148-D    16.093     9.12
25-D     16.272     12.583

Figure 3: Improvement in precision (%) with relevance feedback over 5 iterations at scope 20 (25-D vs. 148-D).

Figure 4: Improvement in precision (%) with relevance feedback over 5 iterations at scope 200 (25-D vs. 148-D).

At scope = 20, the precision with 25-D is marginally worse, by 0.7%, but at scope = 200 it is improved by 3.297%. The effect of dimension reduction is more prominent at higher scope.

In Figure 3 and Figure 4, we have shown results up to 5 iterations, as beyond that the performance does not improve significantly; this indicates that the algorithm converges fast. The improvement with relevance feedback (from no iteration to the fifth iteration) is larger with 25-D than with 148-D, and it is also more prominent at higher scope; see Table 1. It is well known that more training samples are required as the feature dimension increases. Thus, over the iterations, the improvement is larger with the reduced dimension than with the original dimension.

4 Conclusion and Future Directions

The addition of the S and V spaces to the H space adds to the information content of the images and thus enhances system performance significantly. The online computation with the original dimension is O(3L^2), whereas it is O(3(L+1)) in our method. This reduction in the number of computations is very significant in today's real situations, where image database sizes are already very large and are increasing day by day. At low scope, the precision with our method is marginally worse, which can be attributed to the information loss due to dimension reduction. At higher scope, the effect of the low dimension becomes more prominent and, as a result, the precision with our method is much better.

However, we need to perform further experiments with a larger dataset and with different datasets to gain more confidence in the experimental findings and in the generality of the system performance.

Also, we need to investigate other existing methods of dimension reduction (e.g. PCA) and compare the performance of each method.

References

[1] R. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610-621, November 1973.

[2] J. Huang, Color-Spatial Image Indexing and Applications. PhD Dissertation, Cornell University, 1998.

[3] S. Shim and T. Choi, "Image indexing by modified color co-occurrence matrix," in International Conference on Image Processing, September 2003.

[4] T. Ojala, M. Rautiainen, E. Matinmikko, and M. Aittola, "Semantic image retrieval with HSV correlograms," in Proceedings of the 12th Scandinavian Conference on Image Analysis, (Bergen, Norway), pp. 621-627, 2001.

[5] A. Jain and D. Zongker, "Feature selection: Evaluation, application, and small sample performance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 2, pp. 153-158, February 1997.

[6] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. New York, Chichester: John Wiley and Sons, 2001.

[7] P. Wu, B. Manjunath, and H. Shin, "Dimensionality reduction for image retrieval," in Proceedings of the IEEE International Conference on Image Processing (ICIP 2000), (Vancouver, Canada), vol. 3, pp. 726-729, September 2000.

[8] Y. Rui, T. S. Huang, and S.-F. Chang, "Image retrieval: Current techniques, promising directions and open issues," Journal of Visual Communication and Image Representation, vol. 10, no. 4, April 1999.

[9] M. Ortega-Binderberger and S. Mehrotra, Relevance Feedback in Multimedia Databases, chap. 23, pp. 1-28. CRC Press, 2003.

[10] S. Aksoy and R. Haralick, "A weighted distance approach to relevance feedback," in International Conference on Pattern Recognition, (Barcelona, Spain), September 2000.

[11] E. Hore and S. Ray, "A sum-result indexing algorithm for feature combining in content-based image retrieval," in Proceedings of the Fourth IASTED International Conference on Signal and Image Processing, (Hawaii, USA), pp. 283-287, August 2002.

[12] D. P. Huijsmans and N. Sebe, "How to complete performance graphs in content-based image retrieval: Add generality and normalize scope," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, February 2005.

[13] H. Muller, W. Muller, D. M. Squire, and T. Pun, "Performance evaluation in content-based image retrieval: Overview and proposals," Technical Report, Computer Vision Group, University of Geneva, December 1999.
