On Discovery: a geometric view of Hadamard matrices

February 7, 2013

OK, it’s ARC Discovery grant application time yet again. I have posted about this before, so won’t go into all that again, but… a discussion with someone in the Research Office of the College of Physical and Mathematical Sciences led me to start rethinking what I was going to propose to investigate. I started thinking again about the Hadamard Conjecture, the conjecture that Hadamard matrices exist for every dimension 4k, but this time, geometrically.

What is a Hadamard matrix? It is a square matrix H of order n containing only the entries -1 and 1 such that the columns (and the rows) are pairwise orthogonal. That is, the matrix H^T H is a multiple of the identity; in fact, H^T H = nI. So H itself is \sqrt{n} times an orthogonal matrix. All this is well known, but what does it really mean geometrically?
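
As a quick sanity check of that definition, here is a minimal sketch in Python with NumPy (my own illustration, not part of the original argument), using a Sylvester-type Hadamard matrix of order 4:

    # Check the defining property H^T H = nI for a small Hadamard matrix.
    import numpy as np

    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])   # Sylvester-type Hadamard matrix, n = 4

    n = H.shape[0]
    print(np.allclose(H.T @ H, n * np.eye(n)))   # True: H^T H = nI
    Q = H / np.sqrt(n)
    print(np.allclose(Q.T @ Q, np.eye(n)))       # True: H/sqrt(n) is orthogonal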

Consider an n-dimensional cube of side length 2, with centre the origin, and aligned with the coordinate axes. Each of its 2^n vertices has coordinates (\pm 1, \pm 1, \ldots, \pm 1). So any square matrix A of order n containing only the entries -1 and 1 will map each of the n canonical unit basis vectors e_1 := [1, 0, \ldots, 0]^T, e_2 := [0, 1, 0, \ldots, 0]^T, etc. to a vector corresponding to one of these 2^n vertices. Let’s call the directions from the origin to these 2^n vertices diagonal directions. Each pair of opposite vertices v and -v lies on a line through the origin, which we will call a diagonal line. There are 2^{n-1} diagonal lines. If the matrix A is Hadamard, then the n vectors A e_1, \ldots, A e_n will be pairwise orthogonal. Thus if a Hadamard matrix of order n exists, then it is possible to find a subset of n diagonal lines, out of the full set of 2^{n-1} diagonal lines, such that the n diagonal lines in the subset are pairwise orthogonal.
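
This reformulation can be checked by brute force for small n. The sketch below (again my own illustration, in NumPy) represents each of the 2^{n-1} diagonal lines by its vertex with first coordinate +1, and searches for subsets of n pairwise orthogonal lines; the representatives of any such subset, taken as the columns of a matrix, form a Hadamard matrix of order n:

    # Brute-force search for n pairwise orthogonal diagonal lines, n = 4.
    from itertools import product, combinations
    import numpy as np

    n = 4
    # One representative vertex per diagonal line: fix the first coordinate to +1.
    lines = [np.array((1,) + rest) for rest in product((1, -1), repeat=n - 1)]
    print(len(lines))                    # 2^(n-1) = 8 diagonal lines

    orthogonal_sets = [
        subset for subset in combinations(lines, n)
        if all(np.dot(u, v) == 0 for u, v in combinations(subset, 2))
    ]
    print(len(orthogonal_sets))          # at least one such subset exists for n = 4

    H = np.column_stack(orthogonal_sets[0])
    print(np.allclose(H.T @ H, n * np.eye(n)))   # True: the columns form a Hadamard matrix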

As a simple example, consider n=2. Let S be the square with side length 2, centre the origin, aligned with the coordinate axes. The Hadamard matrix

\left[  \begin{array}{cc}  1 & 1  \\  1 & -  \end{array}  \right]

(with - standing for -1) maps the standard basis vectors as follows: (1,0) \rightarrow (1,1), (0,1) \rightarrow (1,-1): that is, the point (1,0) is mapped to the top right corner of S, and the point (0,1) is mapped to the bottom right hand corner of S. The key point here is that the diagonals of the square S are at right angles to each other.

In a subsequent post, I will describe a way of taking a Hadamard matrix of order n = 2k, and decomposing its corresponding special orthogonal matrix into a commuting product of rotations in k mutually orthogonal 2-planes.

A curious tale

February 2, 2012

Curiosity can lead to interesting places. The following paragraphs are excerpted from one of my recent job applications.

… I have been interested in science, mathematics and computer science from an early age. In 1975, I attended a winter workshop in programming at what is now the Powerhouse Museum in Sydney. I attended the AAMT Summer School at ANU in 1977. My undergraduate degree at UNSW focused on mathematics as well as computer science. Before starting on my Honours project in computer science at UNSW, I was a vacation scholar in mathematics at ANU.

During my subsequent career in various programming and information technology roles, including data communications, I maintained a keen amateur interest in science, especially astronomy and space science.
My interest in mathematical research was rekindled in 1995 when I first heard of Grassmann and Clifford algebras.

Being a member of the ACS, ACM and IEEE, and having completed a Masters degree in Information Systems at UNSW, I was well aware of both Moore’s Law and the development of the Internet.

In the period 1996 to 2001, as a secondary researcher at Accenture, my role included tracking the capabilities of hardware and software. At Accenture I learnt that, increasingly, science and engineering were being conducted via “grand challenge” simulations on supercomputers on the “grid”. In short, I knew that computers were finally becoming fast enough that more realistic calculations and simulations could be performed.

In 2000, I decided to pursue a PhD in Computational Science. My choice of a Coursework Masters in mathematics at UNSW was based on the number of course exemptions in mathematics I could obtain there. My project resulted in the open source GluCat library for computation in Clifford algebras, as well as my first mathematical research paper.

In 2001, immediately after leaving Accenture, I worked as a programmer and a laboratory tutor at UNSW, learning Maple, Matlab, Fortran 90 and ScaLAPACK in the process. I was fortunate to share an office with the High Performance Computing Support Unit at UNSW. In 2005, towards the end of my PhD, I found a job at the University of Sydney, and there quickly learnt IDL, LabVIEW, Java, MySQL and R. While there, I kept up an informal liaison with ac3, attending many of their meetings. …

Parallel enumeration of restricted growth strings, using MPI

January 31, 2012

A152139(116)=7897, says Orac (64 processors using MPI), after enumerating 163305339345225 restricted growth strings.
http://oeis.org/A152139
Thanks to Joerg Arndt for the original code from FXT.

I’m intending to write this up in about a month’s time.  In the meantime, see “A conjecture on the alphabet size needed to produce all correlation classes of pairs of words”, ACCMCC, 2010.
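
For readers unfamiliar with the objects being counted: a restricted growth string of length n is a string a_1 \ldots a_n of nonnegative integers with a_1 = 0, where each later entry is at most one more than the maximum of the entries before it; there are exactly B(n) of them, the nth Bell number. Below is a minimal serial sketch in Python of the enumeration (my own illustration only; it is not the FXT code, the MPI decomposition, or the test applied to each string to compute A152139):

    # Serial enumeration of restricted growth strings (RGS) of a given length.
    # An RGS starts with 0, and each later entry is at most one more than the
    # maximum of the entries before it.
    def restricted_growth_strings(length):
        """Yield every restricted growth string of the given length as a tuple."""
        def extend(prefix, current_max):
            if len(prefix) == length:
                yield tuple(prefix)
                return
            for value in range(current_max + 2):   # allowed values: 0 .. current_max + 1
                prefix.append(value)
                yield from extend(prefix, max(current_max, value))
                prefix.pop()
        yield from extend([0], 0)

    # The count for length n is the Bell number B(n): 1, 2, 5, 15, 52, ...
    for n in range(1, 6):
        print(n, sum(1 for _ in restricted_growth_strings(n)))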

ARC grant application season (again)

March 12, 2011

It’s now, yet again, time for me, as a postdoc, to consider asking the Australian Research Council for money. Why? My contract finishes at the end of the year, and having my salary guaranteed by the ARC for another 3 years, in the form of a Discovery Early Career Researcher Award (DECRA), will enable me to bridge the early career gap between seemingly endless postdoc positions and the first rung of the academic ladder, a position as a lecturer.

Updated GluCat: now for AGACSE 2010.

June 2, 2010

From http://sourceforge.net/projects/glucat/ :

The sqrt() and log() functions, and functions based on them, such as cos(), acosh(), etc. may give incorrect results when using GluCat 0.5.0 and earlier. This is because the algorithms used in GluCat 0.5.0 and earlier for sqrt(x) and log(x) fail if x is represented as a matrix having a negative real eigenvalue.

This has now been fixed in GluCat 0.5.1, with the aid of external libraries which contain eigenvalue functions.
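
To see the underlying issue numerically (in NumPy/SciPy here, not in GluCat itself; this is my own illustration): when a real matrix has a negative real eigenvalue, its principal square root and logarithm are not real matrices, so they fall outside the real matrix representation being used.

    # A real matrix with a negative real eigenvalue: its principal square root
    # and logarithm have complex entries.
    import numpy as np
    from scipy.linalg import sqrtm, logm

    A = np.array([[-4.0, 0.0],
                  [ 0.0, 9.0]])        # eigenvalues -4 and 9

    S = sqrtm(A)                       # principal square root: diag(2i, 3)
    L = logm(A)                        # principal logarithm: diag(log 4 + i*pi, log 9)
    print(np.allclose(S @ S, A))       # True, but S is complex, not real
    print(np.iscomplexobj(S), np.iscomplexobj(L))   # True True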

See “Approximating functions in Clifford algebras: What to do with negative eigenvalues?”, AGACSE 2010. (To be posted on http://www.maths.anu.edu.au/~leopardi )

A research agenda

December 1, 2009

My current and future research naturally splits into two categories.

  1. Current research: unfinished business
    This includes a number of investigations in various stages of completion. The following list is not exhaustive.

    1. Spherical codes with reasonable separation, discrepancy and energy.
      This consists of publishing the main results of the remainder of my PhD thesis, and linking the new paper to the existing papers and the EQSP Matlab toolbox.
    2. Quadrature using sparse grids on products of spheres.
      The remaining research consists of understanding why quadrature error is initially so slow to converge in the case studied so far, and comparing this to another case where initial convergence is faster.
    3. Combinatorics on words.
      I believe that my conjecture on the alphabet size required to exhibit all correlation patterns can be proven, and have been investigating proof techniques. This is an area with scope for collaboration.
    4. Polynomial interpolation on the sphere, random matrices and related conjectures.
      I’ve just begun to explore the connection between polynomial interpolation on the sphere and random matrices. Some standard techniques may help in this investigation.
    5. Various observations involving Clifford algebras.
      Some of these observations simply need to be written up properly and published somewhere, perhaps initially here on my blog. These include the behaviour of the determinant of automorphisms of Clifford algebras, and the relationship between Clifford algebra norms, the Frobenius norm and sums of squares. Other observations need further investigation and proof, for example, the relationship between tms-nets and the real representations of Clifford algebras in a canonical ordering of basis elements.
  2. New research: practical approximation with Clifford algebras.
    This program of research is essentially the program which I proposed in my ARC DP 2010 application. It would include four main themes:


    1. Approximation of functions:
      The approximation of functions in Clifford algebras, including Padé approximation.
    2. Constructive approximation:
      The study of multidimensional differential and integral operators, and associated function spaces and bases, including kernels and polynomial bases.
    3. Compatible discretization:
      The study of discretization in relation to Clifford analysis, and the relationships between Clifford analysis and discrete and continuous exterior calculus.
    4. Approximation of solutions to equations:
      The study of Clifford approximations to the solution of differential and integral equations.

    The program would consist of three main threads, distinguished by the types of outcomes and deliverables expected:

    1. Techniques
      1. New and improved algorithms for the approximation of Clifford, matrix and vector functions;
      2. New and improved algorithms for the approximate solution of various equations;
      3. New and improved signal processing and related engineering algorithms.
    2. Theory
      1. Theory which ideally explains why the new algorithms are faster and more accurate than existing algorithms, or in the worst case, explains why the existing algorithms are the best possible;
      2. Theory which improves our understanding of the relationships (1) between Clifford analysis and the approximation of matrix functions, (2) between Clifford analysis and constructive approximation, and (3) between Clifford analysis, discrete exterior calculus and compatible discretization.
    3. Tools
      Improvements to the available open source software packages for calculation with Clifford algebras, including implementations of the new algorithms, and sufficient documentation to make the new algorithms practical and immediately usable.

The SFLC could have done a better job with its Bilski brief

October 5, 2009

(Reposted from my comment on a Groklaw article)

The SFLC Bilski brief argues that (1) the US Supreme Court has essentially held that mathematical algorithms cannot be patented, and that (2) if mathematics were to be patented, this would be bad for innovation:

… This Court has repeatedly held that subject matter which would have the practical effect of preempting laws of nature, abstract ideas or mathematical algorithms is ineligible for patent protection.

II. Excluding Software From Patentable Subject Matter Maximizes Innovation In Software

If mathematics were patentable, there would be less mathematical innovation. Only those who were rich enough to pay royalties, or who benefited from subsidization by government, or who were willing to sign over the value of their ideas to someone richer and more powerful than themselves, would be permitted access to the world of abstract mathematical ideas. Theorems build upon theorems, and so the contributions of those who could not pay rent— and all the further improvements based upon those contributions — would be lost. …

The second point of the SFLC could have been made stronger by reference to (e.g.) the 1991 Report of the Committee on Algorithms and the Law of the Mathematical Programming Society.

It seems clear from the previous discussion that the nature of work on algorithms is quite different from that in other fields where the principles of patents apply more readily. This in itself is a strong argument against patenting algorithms. In addition, we believe that the patenting of algorithms would have an extremely damaging effect on our research and on our teaching, particularly at the graduate level, far outweighing any imaginable commercial benefit. Here is a partial list of reasons for this view:

  • Patents provide a protection which is not warranted given the nature of our work.
  • Patents are filed secretly and would likely slow down the flow of information and the development of results in the field.
  • Patents necessarily impose a long-term monopoly over inventions. This would likely restrict rather than enhance the availability of algorithms and software for optimization.
  • Patents introduce tremendous uncertainty and add a large cost and risk factor to our work. This is unwarranted since our work does not generate large amounts of capital.
  • Patents would not provide any additional source of public information about algorithms.
  • Patents would largely be concentrated within large institutions as universities and industrial labs would likely become the owners of patents on algorithms produced by their researchers.
  • Once granted, even a patent with obviously invalid claims would be difficult to overturn by persons in our profession due to high legal costs.
  • If patents on algorithms were to become commonplace, it is likely that nearly all algorithms, new or old, would be patented to provide a defense against future lawsuits and as a potential revenue stream for future royalties. Such a situation would have a very negative effect on our profession.

The first point of the SFLC brief could have been stronger as well. It may not be entirely clear that the Supreme Court has ruled that mathematical algorithms as such are unpatentable. The FSF Bilski brief argues that algorithms with insufficient “post-solution activity” are unpatentable:

A. THIS COURT HAS RULED THAT INFORMATION PROCESSING ALGORITHMS WITH “INSIGNIFICANT POST-SOLUTION ACTIVITY” ARE BARRED FROM PATENT-ELIGIBILITY.

There is little controversy that information processing algorithms in their pure, ethereal forms, with no physical component or manifestation of any sort, are excluded from patentability. [ Novel mathematics, for example, is outside the scope of patent-eligibility. “Whether the [mathematical] algorithm was in fact known or unknown at the time of the claimed invention … it is treated as though it were a familiar part of the prior art.” Parker v Flook, 437 U.S. 584, 591-92 (1978) (internal citation omitted). ] However, what is under debate is how much of a physical manifestation an information processing algorithm must have before it is patentable.

In a trio of opinions issued over the span of nine years, this Court clearly rejected the patentability of an information processing algorithm with “insignificant post-solution activity” appended. First, in Gottschalk v. Benson, 409 U.S. 63 (1972), the Court quoted approvingly the 1966 President’s Commission on the Patent System: …

Second, in Parker v. Flook, 437 U.S. 584 (1978), this Court made a more general statement, reiterating the position that loading an algorithm onto a standard computer is merely an attempt to circumvent recognized limitations: …

Third, in Diamond v. Diehr, 450 U.S. 175 (1981), the Court directly reiterated its two previous holdings, while also acknowledging that bona fide, patent-eligible inventions may include a software component: …

To me, the crux of the matter seems to lie in the part of the judgement of the US Federal Circuit in In re Alappat, 33 F.3d at 1543 n.19, 31 USPQ2d at 1556 n.19 (1994), which essentially states that the US Supreme Court has been confused, inconsistent or at best unclear in its understanding of mathematics and how it relates to patentable subject matter:

19 The Supreme Court has not been clear, however, as to whether such subject matter is excluded from the scope of Section 101 because it represents laws of nature, natural phenomena, or abstract ideas. See Diehr, 450 U.S. at 186 (viewed mathematical algorithm as a law of nature); Benson, 409 U.S. at 71-72 (treated mathematical algorithm as an ‘idea’). The Supreme Court also has not been clear as to exactly what kind of mathematical subject matter may not be patented. The Supreme Court has used, among others, the terms ‘mathematical algorithm,’ ‘mathematical formula,’ and ‘mathematical equation’ to describe types of mathematical subject matter not entitled to patent protection standing alone. The Supreme Court has not set forth, however, any consistent or clear explanation of what it intended by such terms or how these terms are related, if at all.

20 The Supreme Court’s use of such varying language as ‘algorithm,’ ‘formula,’ and ‘equation’ merely illustrates the understandable struggle that the Court was having in articulating a rule for mathematical subject matter, given the esoteric nature of such subject matter and the various definitions that are attributed to such terms as ‘algorithm,’ ‘formula,’ and ‘equation,’ and not an attempt to create a broad fourth category of excluded subject matter.

The Manual of Patent Examining Procedure, section 2106.02, Mathematical Algorithms [R-5], says this ruling “recognized the confusion”.

In practical terms, claims define nonstatutory processes if they:

  • consist solely of mathematical operations without some claimed practical application (i.e., executing a “mathematical algorithm”); or
  • simply manipulate abstract ideas, e.g., a bid (Schrader, 22 F.3d at 293-94, 30 USPQ2d at 1458-59) or a bubble hierarchy (Warmerdam, 33 F.3d at 1360, 31 USPQ2d at 1759), without some claimed practical application.

Professor Lee A. Hollaar (whose Bilski amicus brief has been previously discussed on Groklaw) holds in his treatise that the Alappat decision affirmed the US Patent Office’s practice on software-related inventions, including the idea that a programmed general purpose computer is a specialized machine.

Alappat allowed the Federal Circuit to restate and clarify its past decisions on whether software-related inventions are patentable. In particular, it is clear that a programmed general purpose computer must be regarded as a specialized piece of hardware both for determining whether a claim is drawn to statutory subject matter and when determining whether the invention is novel and nonobvious. It is also clear that the ‘mathematical algorithm’ exception to statutory subject matter first discussed by the Supreme Court in Benson is limited to abstract mathematical concepts, not mathematics applied to a practical application. Machines, even though they carry out mathematical operations, are patentable.

This really did not differ substantially from the Patent Office’s practice. The time was long past when the Office rejected an application just because it was a software-related invention. There were over 10,000 patents that could be considered software-related at the time of Alappat. But the Office position had swung back and forth on the patentability of software-related inventions. Alappat restricts the Patent Office from treating software-related inventions more strictly under Section 101 than other inventions.

Would it have been wise for the SFLC to have called attention to the Alappat ruling and its consequences, and asked the US Supreme Court for further clarification of its rulings on mathematical algorithms, in light of the First Amendment? Hasn’t the US Supreme Court ruled on the idea that a general purpose computer when programmed with a mathematical algorithm is a special purpose machine for the purposes of patentability, even if the machine does nothing other than execute the algorithm faster than can be done by hand? Hasn’t the US Supreme Court ruled on what is a “practical application” in relation to the claims of a patent, in particular, whether an idea for a practical application is patentable if the only novel part of the idea is the execution of a mathematical algorithm on a general purpose computer? Should the SFLC have reminded the US Supreme Court more strongly about such rulings and what their bearing should have been on Alappat, let alone Bilski?

