Lecture 23 - Thu 2020 11 19 - Virtual
% ============================================================================ %
% Continue with Rossler ... Lecture starts on #L# below.
%
% ---------------------------------------------------------------------------- %
% (1) Vertical structure, z direction.
%     -------------------------------------
% ....
%
% ---------------------------------------------------------------------------- %
% (1.1) Cantor sets. Fractals.
%       ---------------------------------------------
% ...
%
% ---------------------------------------------------------------------------- %
% (1.2) Dimension.
%       ---------------------------------------------------------
% ...

#L# Lecture starts:

Last lecture we introduced the notion of self-similar dimension for sets that
are self-similar under a stretching transformation: the set can be divided
into N subsets of relative size r < 1, each of which yields back the full set
after stretching by a factor of 1/r. Then

        d = -log(N)/log(r)    [i.e.: N = (1/r)^d].

We now extend the notion of dimension to non-self-similar sets.

1.2c Box dimension. % -------------------------------------------------------- %

Let N = N(r) be the minimum number of "boxes" of size r needed to cover the
object [the boxes could be balls, or cubes; some fixed open shape]. Then let

        d = - lim_{r \to 0} log(N)/log(r)    [if the limit exists].

Motivation: note that N = O(1/r^d) as r vanishes, which returns us to the
over-arching idea of dimension as "how many directions are there in the set."
Motion in each direction allows you to place ~1/r balls along that direction,
resulting in ~(1/r)^d balls total.

"Obviously" the box dimension equals the self-similar dimension when computed
for a self-similar set.

Example: box dimension for a non-self-similar fractal [11.4.2 in the book].
Take a Cantor set [or Sierpinski gasket] where the removed piece is a random
interval/square [not always the middle third, say]. (A numerical box-counting
sketch is given below, after the definition of the Hausdorff dimension.)

Example: the box dimension of both the rationals in [0, 1] (#1) and of all of
[0, 1] is 1. ................................................ [A]

#1. Why? Because when covering the rationals in [0, 1] with intervals of size
    r, no gaps can be left; thus N is the same as the number needed to cover
    the full interval. The same argument shows that the box dimension of the
    irrationals is 1. In fact, the box dimension of *any* dense subset of
    [0, 1] is also 1 -- e.g.: the rationals with denominator 2^n, any n.

[A] poses a conceptual problem: the definition above is too coarse to tell
the difference between the rationals and the real numbers! Hence the
Hausdorff dimension was introduced.

1.2d Hausdorff dimension. % -------------------------------------------------- %

d = dim_H(X) = inf A, where A = set of numbers a > 0 with the property:
    For any epsilon > 0, there is a delta > 0 and a cover U of X such that
    -1- All sets in U have diameter < delta.
    -2- Sum_{B in U} (diam(B))^a < epsilon.

Note: if all the sets in U have the same diameter r < delta, and N is the
number of sets in U, then -2- says: N*r^a < epsilon.
The box dimension db says: N ~ c*r^{-db}; i.e.: N*r^db = O(1). Thus, if the
box dimension exists and a > db,
        N*r^a ~ (N*r^db)*r^{a-db}
becomes arbitrarily small as r gets small. This shows that, when the box
dimension exists,
        Hausdorff dimension <= box dimension.

The Hausdorff dimension does not require some limit to exist: since the
elements of A are positive,
        *** as long as A is non-empty, dim_H exists. ***
Furthermore, for a bounded X embedded in R^n, any a > n is in A
[so A is non-empty].
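Aside [not from the lecture]: the box dimension lends itself to a direct
numerical estimate. Below is a minimal sketch, assuming Python with numpy;
the function names (cantor_points, box_count), depth, and box sizes are ad
hoc choices for illustration. It counts boxes of size r = 3^(-k) covering a
finite-depth approximation of the middle-thirds Cantor set and fits the slope
of log(N) versus log(1/r), which should approximate the self-similar value
log 2 / log 3 ~ 0.63.

    import numpy as np

    def cantor_points(level):
        """Left endpoints of the 2^level intervals at stage `level` of the
        middle-thirds Cantor set construction."""
        pts = np.array([0.0])
        for k in range(1, level + 1):
            # each interval splits into a left third (same left endpoint)
            # and a right third (left endpoint shifted by 2/3^k)
            pts = np.concatenate([pts, pts + 2.0 / 3**k])
        return pts

    def box_count(points, r):
        """Number of boxes [m*r, (m+1)*r) containing at least one point."""
        return np.unique(np.floor(points / r)).size

    points = cantor_points(12)            # resolve the set down to scale 3^(-12)
    rs = 3.0 ** -np.arange(2, 10)         # box sizes well above that resolution
    Ns = np.array([box_count(points, r) for r in rs])

    # slope of log N versus log(1/r) estimates the box dimension
    d_est = np.polyfit(np.log(1.0 / rs), np.log(Ns), 1)[0]
    print(f"box dimension estimate: {d_est:.3f}  "
          f"(log 2 / log 3 = {np.log(2)/np.log(3):.3f})")

Note that the box sizes stop well above the resolution 3^(-12) of the finite
point set; taking r below that scale would show the "saturation" effect
discussed in 1.2f below.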
Example: The Hausdorff dimension of [0, 1] is 1.
Proof: in the definition above, if a < 1 and delta <= 1,
        Sum_{B in U} (diam(B))^a >= Sum_{B in U} diam(B) >= 1.
For delta > 1 (any a < 1) the same bound holds: either every set in U has
diameter <= 1 [and the estimate above applies], or some set in U has diameter
> 1 and contributes a term > 1 by itself. Thus no a < 1 belongs to A. But all
a > 1 are in A: cover [0, 1] by n intervals of length 1/n; then
        Sum_{B in U} (diam(B))^a = n^{1-a} --> 0 as n --> infinity.

Example: The Hausdorff dimension of the rationals in [0, 1] is 0.
Proof: Order the rationals {q_n}_{1 <= n < infty} and let U(r) be the cover
whose B's are intervals of length r^n centered at q_n, where 0 < r < 1. Then
        Sum_{B in U} (diam(B))^a = r^a/(1 - r^a).
For any fixed a > 0, this can be made arbitrarily small by selecting r small
enough. So all these a are in A.

1.2e Point-wise and correlation dimensions. % --------------------------------- %

The box dimension is very expensive to compute numerically, and the Hausdorff
dimension is much worse. In computations other notions of dimension are often
used: the point-wise and correlation dimensions.

Idea: assume a LONG trajectory with equally spaced points along it [see #2],
tracking an attractor. Near a point x in the attractor, count the number of
trajectory points within distance r of x. Let this number be N = N(x, r).
Then, if N ~ r^d,
        d IS THE POINT-WISE DIMENSION.

#2  For this to make sense, the points should be equally spaced in phase
    space (i.e.: by arc-length), while typically one has them equally spaced
    in time. But this is OK, because near any point the phase-space speed is
    approximately constant.

Now, in principle (and in practice) one may get a *different* d for
*different* parts of the fractal [multi-fractal]. In this case: average N
over the set [i.e.: average N over many positions x]. Then, if N_av ~ r^d,
        d IS THE CORRELATION DIMENSION.
[A numerical sketch of this computation is given at the end of these notes.]

1.2f General remarks about dimension. % -------------------------------------- %

Note that measuring dimension is a **tricky** **business**; a reliable
calculation needs lots of data, so that in a plot of log(N) versus log(r)
there is a clear range over which a straight line (with slope d) is
observable. This means that one needs data for r spanning several decades
[i.e.: r_max/r_min has to be rather large]. This is related to the
"saturation" issue.

ISSUES WITH SATURATION: resolution in the computation [or measurement] limits
how small r can be taken. If N(r) is too small, the discreteness of the data
becomes an issue --- adding just one point produces a large relative change
in N, and at some threshold one just gets N(r) = 1 [the point at the center
of the ball of radius r is the only point] ... i.e.: "d = 0". On the other
hand, for r too large, the whole set is included -- with N the total number
of points. Thus, in a plot of log(N) versus log(r) the region with a constant
slope may not be very big. To get a reliable measure, one needs this region
to span *several* decades in r, which requires a huge number of points and
high resolution.

In particular, for a fractal one wants enough accuracy to reliably claim that
d is NOT an integer ... but if d = 2.10 \pm 0.20, one cannot do this. Be
skeptical of published fractal dimensions, unless they are accompanied by
properly computed "error bars".
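Aside [not from the lecture]: a minimal numerical sketch of the computation
described in 1.2e, assuming Python with numpy. The Henon map is used here as
a stand-in attractor (rather than the Rossler system, which would require
integrating an ODE); the orbit length, radii, and reference-point subsampling
are ad hoc choices. The correlation dimension is estimated as the slope of
log(N_av) versus log(r); for the standard Henon parameters it should come out
non-integer, around 1.2, and pushing r outside the chosen range shows the
saturation effects described in 1.2f.

    import numpy as np

    def henon_orbit(n, a=1.4, b=0.3, transient=1000):
        """Long orbit of the Henon map, used as a stand-in chaotic attractor."""
        x, y = 0.1, 0.1
        pts = np.empty((n, 2))
        for i in range(n + transient):
            x, y = 1.0 - a * x * x + y, b * x
            if i >= transient:
                pts[i - transient] = (x, y)
        return pts

    pts = henon_orbit(10000)
    centers = pts[::50]               # reference points x (a subsample)
    rs = np.logspace(-2.5, -0.5, 10)  # radii spanning about two decades

    # N_av(r): number of orbit points within distance r of x, averaged over x.
    # (Using a single center instead gives the point-wise dimension at that x.)
    counts = np.zeros(len(rs))
    for c in centers:
        d = np.linalg.norm(pts - c, axis=1)   # distances from x = c to the orbit
        counts += np.array([np.sum(d < r) for r in rs])
    N_av = counts / len(centers)

    # slope of log N_av versus log r over the scaling range
    # estimates the correlation dimension
    slope = np.polyfit(np.log(rs), np.log(N_av), 1)[0]
    print(f"correlation dimension estimate: {slope:.2f}")

%
% ============================================================================ %
EOF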