Tenney-Euclidean temperament measures
Introduction
Given a multival or multimonzo which is a wedge product of weighted vals or monzos, we may define a norm on it starting from the usual Euclidean norm. We rescale by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This gives a norm which is the RMS (root mean square) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote this RMS norm by ||M||.
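As a concrete illustration, here is a minimal Python sketch of this RMS normalization; the function name rms_norm and the use of numpy are illustrative choices, not part of any standard temperament library.

    import numpy as np

    def rms_norm(multivector_entries):
        # RMS norm: square root of the mean of the squared entries of the multivector.
        x = np.asarray(multivector_entries, dtype=float)
        return np.sqrt(np.mean(x ** 2))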
TE Complexity
Given a wedgie M, that is, a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the complexity of M; that is, of how many notes, in some weighted average sense, it takes to reach the intervals of the temperament. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been extensively studied by Graham Breed, and we may call it Tenney-Euclidean complexity, or TE complexity, since it can be defined in terms of the Tenney-Euclidean norm.
In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the Gramian. This is the determinant of the Gram matrix, the square matrix defined from a list of r vectors whose (i,j)-th entry is vi.vj, the dot product of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have

||M|| = sqrt(det([vi.vj]) / C(n, r))
where C(n, r) is the number of combinations of n things taken r at a time, and vi.vj is the TE symmetric form on vals, which in weighted coordinates is simply the ordinary dot product. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.
If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the diagonal matrix with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the TE tuning projection matrix P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos, A*(AW^2A*)^(-1)A.
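Putting the pieces together, the following sketch computes TE complexity from a list of unweighted vals via the Gramian. It assumes numpy is available; the names te_complexity, vals and primes are illustrative.

    import numpy as np
    from math import comb, log2

    def te_complexity(vals, primes):
        # vals: list of unweighted vals (rows of A), e.g. [[12, 19, 28], [7, 11, 16]]
        # primes: the primes of the limit, e.g. [2, 3, 5]
        A = np.array(vals, dtype=float)                 # r x n unweighted val matrix
        W = np.diag([1.0 / log2(p) for p in primes])    # Tenney weighting matrix
        V = A @ W                                       # weighted vals as rows
        n, r = len(primes), A.shape[0]
        gramian = np.linalg.det(V @ V.T)                # det of the Gram matrix [vi.vj]
        return np.sqrt(gramian / comb(n, r))            # RMS-normalized norm ||M||

For instance, te_complexity([[12, 19, 28], [7, 11, 16]], [2, 3, 5]) would give the TE complexity of the rank two 5-limit temperament (meantone) defined by these two vals.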
TE simple badness
If J = <1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error in proportion to the complexity, or size, of the multival; in particular, for a 1-val it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J∧(v1-a1J)∧(v2-a2J)∧...∧(vr-arJ) = J∧v1∧v2∧...∧vr, since any wedge product containing J more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain:

||J∧M|| = sqrt(n det([(vi-aiJ).(vj-ajJ)]) / C(n, r+1))
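Under the same assumptions as the complexity sketch above, the Gramian computation of relative error might look like this; te_simple_badness is again an illustrative name.

    import numpy as np
    from math import comb, log2

    def te_simple_badness(vals, primes):
        # Relative error ||J ^ M||, from the Gramian of the vectors vi - ai*J.
        A = np.array(vals, dtype=float)
        W = np.diag([1.0 / log2(p) for p in primes])
        V = A @ W                                       # weighted vals vi as rows
        n, r = len(primes), A.shape[0]
        a = V.mean(axis=1)                              # ai = J.vi / n, mean entry of each vi
        D = V - a[:, None]                              # rows vi - ai*J, with J = <1 1 ... 1|
        gramian = np.linalg.det(D @ D.T)                # det of [(vi - ai*J).(vj - aj*J)]
        return np.sqrt(n * gramian / comb(n, r + 1))    # the n is the (1,1) entry of the full Gram matrix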
TE error
TE simple badness can thus be considered a form of error relative to complexity. By dividing it by TE complexity, we get an error measurement, TE error. Multiplying this by 1200, we get a figure which can be regarded as a weighted average error in cents.
From the normalization of the ratio (||J∧M||/||M||)^2 we obtain a rank-dependent factor C(n, r+1)/(n C(n, r)) = (n-r)/(n(r+1)). If we take the ratio of this for rank one with this for rank r, the "n" cancels, and we get (n-1)/2 * (r+1)/(n-r) = (r+1)(n-1)/(2(n-r)). It follows that dividing TE error by the square root of this ratio gives a constant of proportionality such that if Ψ is the TE error of a rank r temperament then

ψ = Ψ / sqrt((r+1)(n-1)/(2(n-r))) = Ψ sqrt(2(n-r)/((r+1)(n-1)))

is an adjusted error which makes the error of a rank r temperament correspond to the errors of the edo vals which support it, so that requiring the edo val error to be less than (1 + ε)ψ for any positive ε results in an infinite set of vals supporting the temperament. ψ can be related to TE error as it appears on Graham Breed's temperament finder, which we will call RMS error; if G denotes the RMS error for a temperament, then

ψ = sqrt(2n/(n-1)) G
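Reusing the te_complexity and te_simple_badness sketches above, TE error and the adjusted error ψ could be computed as follows; the function name is again illustrative.

    from math import sqrt

    def te_error_and_psi(vals, primes):
        # TE error = simple badness / complexity; psi rescales it to the rank-1 case.
        n, r = len(primes), len(vals)
        error = te_simple_badness(vals, primes) / te_complexity(vals, primes)
        psi = error * sqrt(2 * (n - r) / ((r + 1) * (n - 1)))
        return 1200 * error, 1200 * psi                 # both expressed in cents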
RMS error and ψ error both have the advantage that the error of a higher-rank temperament corresponds directly to rank-one error, but the RMS normalization has the further advantage that in the rank one case, G = sin θ, where θ is the angle between J and the val in question. Multiplying by 1200 to obtain a result in cents leads to 1200 sin θ, TE error as it appears on the temperament finder pages. RMS error may also be found using the TE tuning map; if T is the tuning map, then

G = sqrt(((J - T).(J - T))/(J.J))
where the dot represents the ordinary dot product. If T is denominated in cents, then J should be also, so that J = <1200 1200 ... 1200|.
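A sketch of this last computation, again assuming numpy and working in weighted coordinates where J = <1 1 ... 1|, so that the result is dimensionless and can be multiplied by 1200 to obtain cents:

    import numpy as np
    from math import log2

    def rms_error_from_tuning_map(vals, primes):
        A = np.array(vals, dtype=float)
        W = np.diag([1.0 / log2(p) for p in primes])
        V = A @ W                                       # weighted vals as rows
        J = np.ones(len(primes))                        # weighted JI point <1 1 ... 1|
        P = V.T @ np.linalg.inv(V @ V.T) @ V            # TE tuning projection matrix
        T = J @ P                                       # weighted TE tuning map
        diff = J - T
        return np.sqrt((diff @ diff) / (J @ J))         # G = sin(theta); 1200*G gives cents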