SPARSE SOLUTION OF UNDERDETERMINED LINEAR
SYSTEMS: ALGORITHMS AND APPLICATIONS
A DISSERTATION
SUBMITTED TO THE INSTITUTE FOR
COMPUTATIONAL AND MATHEMATICAL ENGINEERING
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Yaakov Tsaig
June 2007

© Copyright by Yaakov Tsaig 2007
All Rights Reserved
I certify that I have read this dissertation and that, in my opinion, it
is fully adequate in scope and quality as a dissertation for the degree
of Doctor of Philosophy.
David L. Donoho Principal Advisor
I certify that I have read this dissertation and that, in my opinion, it
is fully adequate in scope and quality as a dissertation for the degree
of Doctor of Philosophy.
Michael A. Saunders
I certify that I have read this dissertation and that, in my opinion, it
is fully adequate in scope and quality as a dissertation for the degree
of Doctor of Philosophy.
Michael Elad
Approved for the University Committee on Graduate Studies.
Abstract
Recent theoretical developments have generated a great deal of interest in sparse sig-
nal representation. The setup assumes a given dictionary of “elementary” signals,
and models an input signal as a linear combination of dictionary elements, with the
provision that the representation is sparse, i.e., involves only a few of the dictionary
elements. Finding sparse representations ultimately requires solving for the spars-
est solution of an underdetermined system of linear equations. Such models arise
often in signal processing, image processing, and digital communications. The main
contributions of this dissertation are in the development and analysis of algorithms
to find sparse solutions of underdetermined linear systems, and in offering potential
applications in signal and image processing.
The first contribution concerns the solution of ℓ₁ minimization problems whose
solution is sparse. A large volume of work established that the minimum ℓ₁-norm
solution to an underdetermined linear system is often, remarkably, also the sparsest
solution to that system. However, general-purpose linear optimization algorithms are
proving to be far too slow for ℓ₁ minimization in many applications. We suggest the
use of the Homotopy algorithm, originally developed by Osborne et al. and Efron et al.
to solve ℓ₁ minimization problems in an overdetermined setting. In the underdetermined
context, we show that when the solution is sufficiently sparse, Homotopy has
the following k-step solution property: if the sparsest solution to an underdetermined
system has at most k nonzeros, the algorithm recovers it in exactly k iterative steps.
When the conditions for this property are met, Homotopy runs in a fraction of the
time it takes to solve the ℓ₁ minimization problem with general-purpose solvers. Our
analysis sheds light on the evident parallelism in the ability of ℓ₁ minimization and
Orthogonal Matching Pursuit (OMP) to recover sparse solutions, by pointing out a
series of connections between the two seemingly disparate approaches.
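As a concrete baseline, the ℓ₁ minimization problem in question, min ‖x‖₁ subject to Ax = y, can be recast as a linear program by splitting x into nonnegative parts, x = u − v. The sketch below does exactly that with scipy.optimize.linprog; it illustrates the slow general-purpose route that Homotopy is designed to beat, and the matrix sizes and sparsity level are illustrative only, not taken from the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. A x = y as an LP via the split x = u - v, u, v >= 0."""
    n, d = A.shape
    c = np.ones(2 * d)                       # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * d))
    u, v = res.x[:d], res.x[d:]
    return u - v

# A 2-sparse vector in R^20 observed through 8 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x0 = np.zeros(20)
x0[3], x0[11] = 1.5, -2.0
x_hat = basis_pursuit(A, A @ x0)
```

By LP optimality, the returned x_hat satisfies the constraints exactly (up to solver tolerance) and has ℓ₁ norm no larger than that of any feasible vector, including x0.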
The second contribution addresses the problem of finding sparse solutions to
underdetermined linear systems from a different perspective, introducing Stagewise
Orthogonal Matching Pursuit (StOMP), a rapid iterative method to find sparse
approximate solutions to such systems. Each iteration of the algorithm simply involves
residualization, thresholding, and projection, with the total number of iterations small
and fixed. Thus, StOMP is much faster than competing approaches to recover sparse
solutions, and at the same time, as our work demonstrates, its ability to recover the
sparsest solution is comparable with that of full-blown ℓ₁ minimization.
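Schematically, one such stage computes residual correlations (a matched filter), keeps the coordinates exceeding a threshold proportional to an estimated noise level, and projects onto the enlarged support. The NumPy sketch below follows that outline; the threshold rule (a fixed multiple t of ‖r‖/√n) and the stage count are simplified placeholders rather than the dissertation's exact choices.

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.0):
    """Stagewise sketch: correlate, hard-threshold, least-squares project, repeat."""
    n, d = A.shape
    support = np.zeros(d, dtype=bool)
    x = np.zeros(d)
    r = y.copy()
    for _ in range(n_stages):
        c = A.T @ r                                  # residual correlations
        sigma = np.linalg.norm(r) / np.sqrt(n)       # crude noise-level estimate
        new = (np.abs(c) > t * sigma) & ~support     # coordinates passing threshold
        if not new.any():
            break
        support |= new
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # projection
        x = np.zeros(d)
        x[support] = coef
        r = y - A @ x                                # residualization
    return x, r

# A 3-sparse vector in R^256 seen through 64 normalized random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x0 = np.zeros(256)
x0[[5, 40, 100]] = [4.0, -3.0, 2.5]
x_hat, r = stomp(A, A @ x0)
```

Because the support only grows and each projection is a least-squares fit over it, the residual norm is non-increasing from stage to stage.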
The third contribution of this dissertation comes in analyzing the performance and
applicability of the theory of Compressed Sensing from a practitioner's viewpoint. The
notion of Compressed Sensing proposes that a signal or image, unknown but supposed
to be compressible by a known transform, can be subjected to fewer measurements
than the nominal number of data points, and yet be reconstructed accurately by
solving an ℓ₁ minimization problem. We offer an empirical analysis of Compressed
Sensing, establishing that it may be applied successfully in various practical settings.
In addition, we offer several extensions to the original proposal, and describe a natural
application of this theory for fast acquisition of MRI data.
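In outline, a Compressed Sensing experiment has three steps: synthesize a signal that is sparse in a known transform, take fewer random measurements than data points, and reconstruct by ℓ₁ minimization on the transform coefficients. The sketch below uses an orthonormal DCT basis (via scipy.fft.idct) and a generic LP solver; the signal length, measurement count, and sparsity level are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

n, n_meas = 32, 12
Psi = idct(np.eye(n), axis=0, norm='ortho')   # orthonormal inverse-DCT basis
a0 = np.zeros(n)
a0[2], a0[7] = 3.0, -1.0                       # sparse transform coefficients
x0 = Psi @ a0                                  # the (dense) signal itself

rng = np.random.default_rng(2)
Phi = rng.standard_normal((n_meas, n))         # random measurement matrix
y = Phi @ x0                                   # 12 measurements of 32 samples

# Reconstruct: min ||a||_1 subject to (Phi Psi) a = y, as an LP with a = u - v.
M = Phi @ Psi
res = linprog(np.ones(2 * n), A_eq=np.hstack([M, -M]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
a_hat = res.x[:n] - res.x[n:]
x_hat = Psi @ a_hat
```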
Finally, we apply the sparse representation model to a classical problem in digital
communications, of designing codes that can correct errors in a noisy communica-
tion environment. We consider a digital channel model corrupted by two types of
interference: additive white Gaussian noise, and additive impulse noise. We propose
a class of ‘random’ codes for robust transmission over this communication channel,
which may be decoded by solving an ℓ₁ minimization problem using the Homotopy
algorithm. We present simulation results demonstrating that for certain channel con-
figurations, the rate at which we can reliably transmit information with a negligible
probability of error using the proposed codes is comparable to the fundamental limits
set by Shannon’s capacity.
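For the impulse-only channel (no Gaussian component), decoding reduces to sparse recovery in the code's null space: a parity-check matrix H with HG = 0 turns the received word into a syndrome that depends only on the impulse pattern, which is then recovered by ℓ₁ minimization. The small-scale sketch below is a hypothetical illustration that substitutes a generic LP solver for the Homotopy algorithm; the code length, rate, and impulse positions are invented for the example.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

# Encode a k-dim message with a random n x k generator (rate k/n).
rng = np.random.default_rng(3)
n, k = 30, 10
G = rng.standard_normal((n, k))
m = rng.standard_normal(k)
e = np.zeros(n)
e[[2, 17]] = 5.0, -3.0                  # sparse impulse noise
r = G @ m + e                           # received word (impulse noise only)

# Annihilate the code: rows of H span null(G^T), so H G = 0 and H r = H e.
H = null_space(G.T).T                   # (n-k) x n parity-check matrix
s = H @ r                               # syndrome depends only on e

# Recover e as the minimum-l1 solution of H e = s (LP via e = u - v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([H, -H]), b_eq=s, bounds=[(0, None)] * (2 * n))
e_hat = res.x[:n] - res.x[n:]
m_hat, *_ = np.linalg.lstsq(G, r - e_hat, rcond=None)   # strip impulses, decode
```

When the recovered impulse pattern equals the true one, subtracting it leaves a clean codeword and the message is read off by least squares.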
Acknowledgements
The journey leading to this Ph.D. thesis has been a wonderful and often overwhelming
experience. I am indebted to many people who made the time spent working on this
thesis an unforgettable experience.
It is hard to overstate my gratitude to my Ph.D. advisor, David Donoho. To be
in the presence of such a prodigious mind is an eye-opening experience. With his
guidance, I learned to challenge traditional patterns of thought, and to tackle seemingly
unsolvable scientific problems. He has taught me to follow high scientific standards
in my work. He has shaped and influenced my thinking in a way no other person did.
I am also indebted to Michael Saunders, who has followed and guided my research
throughout my time at Stanford. I have lost count of the number of times I came to
him for optimization advice, and left his office with a mind full of new directions and
insights. His valuable feedback and expert counsel helped shape much of the research
carried out while working on this thesis. Most importantly, Michael is one of the
kindest people I have met, and I wholeheartedly enjoyed spending time with him.
I am grateful to Michael Elad for his infinite support during my first year in
the program. He took me under his wing, and showed me how to tackle research
problems and stimulate original thought. He has taught me that with clear, structured
thinking, any scientific challenge, no matter how hard, is within reach. His energy
and dedication are an inspiration to me.
I have had the pleasure of knowing and working with Gene Golub, one of the most
inspirational and influential figures in the numerical analysis community. I thank him
for his guidance and support during my years at Stanford. His teachings opened my
mind to the beauty of linear algebra and numerical analysis, and shaped my career
path as well as my interests.
I would like to thank Tsachy Weissman for his counsel on the error correction
work appearing in this thesis. His expert advice and stimulating discussions are a
major part of that work. I am also grateful to Biondo Biondi, Stephen Boyd, and
Robert Tibshirani, who took the time and effort to serve on my oral committee.
I would like to express my deepest gratitude to Indira Choudhury, who was always
there to help with the administrative requirements of the program. I am forever
indebted to her for the tremendous support and help she offered in arranging my
oral defense. I also thank Helen Tombropoulos and the administrative staff in the
Statistics Department at Stanford, my home away from home in the past few years.
Special thanks go to my friends and colleagues Miki Lustig, Ofer Levi, Tomer
Yahalom, Vicki Stodden, and Armin Schwartzman, for their advice and for countless
hours spent discussing fruitful ideas, as well as for their support and encouragement
throughout this endeavor.
I am forever grateful to Kristen Backor for her endless love and uncompromising
support throughout this entire journey. Sharing my life with her in the past few years
made my time at Stanford one of the sweetest and most unforgettable periods in my
life.
Lastly, I wish to thank my parents, Sammy and Varda Tsaig. They bore me,
raised me, supported me, taught me, and loved me. To them I dedicate this thesis.
Contents
Abstract iv
Acknowledgements vi
1 Introduction 1
1.1 The Search for Sparsity . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 In Pursuit of Sparse Solutions . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Making Sense of Random Sensing . . . . . . . . . . . . . . . . . . . . 5
1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
I Algorithms 11
2 Sparse Solution of ℓ₁ Minimization Problems with the Homotopy
Method 12
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Greedy Approaches . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.2 A Continuation Algorithm To Solve (P₁) . . . . . . . . . . . . 14
2.1.3 LARS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.4 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1.5 Relation to Earlier Work . . . . . . . . . . . . . . . . . . . . . 22
2.1.6 Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 The Homotopy Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Computational Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.1 Running Times . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.2 Complexity Analysis . . . . . . . . . . . . . . . . . . . . . . . 28
2.3.3 Number of Iterations . . . . . . . . . . . . . . . . . . . . . . . 29
2.4 Fast Solution with the Incoherent Ensemble . . . . . . . . . . . . . . 32
2.4.1 Correct Term Selection . . . . . . . . . . . . . . . . . . . . . . 34
2.4.2 Sign Agreement . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.3 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5 Fast Solution with the Uniform Spherical Ensemble . . . . . . . . . . 39
2.5.1 Phase Diagrams and Phase Transitions . . . . . . . . . . . . . 40
2.5.2 The k-Step Solution Property . . . . . . . . . . . . . . . . . . 41
2.5.3 Correct Term Selection and Sign Agreement . . . . . . . . . . 43
2.5.4 Random Signs Ensemble . . . . . . . . . . . . . . . . . . . . . 46
2.6 Fast Solution with Partial Orthogonal Ensembles . . . . . . . . . . . 47
2.7 Bridging ℓ₁ Minimization and OMP . . . . . . . . . . . . . . . . . . . 52
2.7.1 ℓ₁ −→ Homotopy . . . . . . . . . . . . . . . . . . . . . . . . 53
2.7.2 Homotopy −→ LARS . . . . . . . . . . . . . . . . . . . . . . . 53
2.7.3 LARS −→ OMP . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.8 Homotopy and PFP . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.9 Recovery of Sparse Solutions . . . . . . . . . . . . . . . . . . . . . . . 59
2.10 Example: Component Separation . . . . . . . . . . . . . . . . . . . . 61
2.11 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3 Rapid Approximate Solution of Underdetermined Linear Systems
with Stagewise Orthogonal Matching Pursuit 67
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.1.1 StOMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.1.2 Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.2 Stagewise Orthogonal Matching Pursuit . . . . . . . . . . . . . . . . 71
3.2.1 The Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.2.2 An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.2.3 Approximate Gaussianity of Residual Correlations . . . . . . . 75
3.2.4 Threshold Selection . . . . . . . . . . . . . . . . . . . . . . . . 78
3.3 Performance Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.4 Computational Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.4.1 Complexity Analysis . . . . . . . . . . . . . . . . . . . . . . . 81
3.4.2 Running Times . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.5 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.5.1 Partial Orthogonal Ensembles . . . . . . . . . . . . . . . . . . 85
3.5.2 Other Coefficient Ensembles . . . . . . . . . . . . . . . . . . . 86
3.5.3 Noisy Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.6 Example: Component Separation . . . . . . . . . . . . . . . . . . . . 88
3.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
II Applications 92
4 Extensions of Compressed Sensing 93
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2 ℓ₀ Sparsity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.3 ℓₚ Sparsity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4 Different Ensembles of CS Matrices . . . . . . . . . . . . . . . . . . . 106
4.5 Denoising CS Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.6 Noise-Aware Reconstruction . . . . . . . . . . . . . . . . . . . . . . . 108
4.7 Two-Gender Hybrid CS . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.8 Multiscale Compressed Sensing . . . . . . . . . . . . . . . . . . . . . 118
4.9 Compressed Sensing with StOMP . . . . . . . . . . . . . . . . . . . . 121
4.10 MRI Acquisition . . . . . . . . . . . . . . . . . . . . . . 126
4.11 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5 Error Correction in a Mixed Interference Channel 131
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.2 Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.3 Decoding, σ_U = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134