  # Tutorial 2 - Data Structures '10



Jonathan Cederberg

Wednesday, September 15th, 2010
## Outline

1. First assignment
2. Probability
3. Sorting
4. Invariants
5. Second assignment
6. Slack

## Things to remember

- The definitions of Ω, O, Θ.
- A sum grows according to its fastest-growing term (the leftmost one).
- Polynomials grow faster than polylogs.
- Exponentials grow faster than polynomials.
- Larger exponent ⇒ faster growth.
- Larger base ⇒ faster growth.
## Review of assignment 1

(Slide: the functions from assignment 1 — among the recoverable ones lg n, n, n lg n, n², n³, 4^(lg n), 2^n, 2^(2n), n!, (n+1)!, n^n — arranged in order of asymptotic growth.)
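As a concrete sanity check on the growth rules above, a small Python sketch can tabulate a few of these functions at increasing n (the selection below is illustrative, not the exact assignment set):

```python
import math

# A few growth rates from the slide, listed slowest to fastest
# (polylog < polynomial < exponential < factorial).
functions = [
    ("lg n",   lambda n: math.log2(n)),
    ("n",      lambda n: n),
    ("n lg n", lambda n: n * math.log2(n)),
    ("n^2",    lambda n: n ** 2),
    ("n^3",    lambda n: n ** 3),
    ("2^n",    lambda n: 2 ** n),
    ("n!",     lambda n: math.factorial(n)),
]

for n in (10, 20, 30):
    print(f"n = {n}")
    for name, f in functions:
        print(f"  {name:>6} = {f(n):.3g}")
```

Even at n = 30 the gap between the polynomial and the exponential/factorial entries is enormous, which is the point of the ordering exercise.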
## Comments

- I have still not received any questions by e-mail. Use this possibility to influence these sessions.
- Deadlines are hard. Late submissions of the assignments will not be considered. You just have to get the 24 points.
- When presenting proofs, use words like “since” and “therefore”. The three dots (∴) are forbidden.
- Answers without explicit justification give 0 points in an exam. Numerical justification is no justification.
- O is not the same as o!
- Exponentiation is right-associative.
- Use logarithms with care!
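Two of the comments above are easy to verify concretely. Python's `**` operator is right-associative, just like mathematical exponentiation, so it serves as a quick check:

```python
import math

# Exponentiation is right-associative: a^b^c means a^(b^c), not (a^b)^c.
print(2 ** 3 ** 2)    # 2 ** (3 ** 2) = 2 ** 9 = 512
print((2 ** 3) ** 2)  # 8 ** 2 = 64

# Use logarithms with care: lg(a * b) = lg a + lg b holds,
# but there is no comparable rule for lg(a + b).
assert math.log2(8 * 4) == math.log2(8) + math.log2(4)  # 5 = 3 + 2
```

The first two lines show why writing 2^3^2 without parentheses is read as 2^(3^2) = 512, not 64.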
## Review of probability theory

- A sample space S is the set of all possible outcomes.
- An event A is a subset A ⊆ S.
- A random variable is often denoted X.
- The expected value E[X] of a random variable X is defined as
  E[X] = Σₓ x · Pr{X = x}.
- Note that E[·] is a linear operator, i.e.
  E[Σᵢ₌₁ⁿ aᵢXᵢ] = Σᵢ₌₁ⁿ aᵢ · E[Xᵢ].

## Exercise: Bleaching

You have a function, Biased-Random, that returns 1 with probability p and 0 with probability 1 − p. Sadly, you do not know p. Design a function, Unbiased-Random, that returns 1 with probability 1/2 and 0 with probability 1/2.

    Unbiased-Random()
        while true
            do x ← Biased-Random()
               y ← Biased-Random()
               if x ≠ y
                   then return x
## Exercise (cont.): Bleaching

Why does this work? Because Unbiased-Random only returns when x = 0 and y = 1, or vice versa. Since

    Pr{Unbiased-Random returns 0} = Pr{x = 0 ∧ y = 1} = (1 − p) · p
                                  = p · (1 − p) = Pr{x = 1 ∧ y = 0}
                                  = Pr{Unbiased-Random returns 1}

and there are no other outcomes, Unbiased-Random is fair. Note that this relies on the calls to Biased-Random being independent.
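The pseudocode above translates directly into a runnable Python sketch; the concrete bias value P below is an assumption for illustration (in the exercise, p is unknown):

```python
import random

random.seed(0)
P = 0.8  # assumed bias for the demo; unbiased_random never uses it

def biased_random():
    """Return 1 with probability P, 0 with probability 1 - P."""
    return 1 if random.random() < P else 0

def unbiased_random():
    """Von Neumann's trick: draw pairs until the two draws differ."""
    while True:
        x = biased_random()
        y = biased_random()
        if x != y:
            return x  # Pr{x=0, y=1} = Pr{x=1, y=0} = p(1 - p)

# Empirically the output should be close to 50/50 despite the 80/20 source.
draws = [unbiased_random() for _ in range(100_000)]
print(sum(draws) / len(draws))  # ≈ 0.5
```

The price of the trick is running time: each iteration succeeds with probability 2p(1 − p), so the expected number of pairs drawn per output bit is 1 / (2p(1 − p)).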
## Definition

An indicator random variable I{A} of an event A is defined as

    I{A} = 1 if A occurs,
           0 if A does not occur.

By the definition of expected value we have, for any indicator random variable X_A = I{A},

    E[X_A] = E[I{A}] = 1 · Pr{A} + 0 · Pr{S \ A} = Pr{A}
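The identity E[I{A}] = Pr{A} can be checked by simulation. Here the event A is "a fair six-sided die shows 5 or 6", so Pr{A} = 1/3; the die example is an assumption for illustration, not from the slides:

```python
import random

random.seed(1)

def indicator_A():
    """I{A} for the event A: a fair die shows 5 or 6 (Pr{A} = 1/3)."""
    roll = random.randint(1, 6)
    return 1 if roll >= 5 else 0

trials = 100_000
# The sample mean of the indicator estimates E[I{A}] = Pr{A}.
estimate = sum(indicator_A() for _ in range(trials)) / trials
print(estimate)  # ≈ 1/3
```

Estimating Pr{A} by averaging an indicator is exactly the derivation above in empirical form, and by linearity of E[·] the same averaging extends to sums of indicators.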