A Dirt Simple Algorithm for a Reflexive Generalized Inverse

let A be an m by n matrix considered as a list of columns; construct B, an n by m matrix (considered as a list of rows), by elimination, such that BAB = B, ABA = A, and AB is an orthogonal projection onto the image of A

in exact arithmetic and pythonesque pseudo-code:

B = array(A)
for k,a in enumerate(A):
    v = B[k] # pivot row
    va = inner(v,a)
    if va: # note that v is nonzero if and only if va is positive
        v *= 1/va # scale on <a>
        for j,b in enumerate(B):
            if j == k: continue
            b -= inner(b,a)*v # elimination on <a>, updating the row in place
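
for reference, here is one way to render the pseudo-code as runnable numpy;
the name reflexive_inverse is illustrative, and the eps-based rank decision
is borrowed from the fuller routine below:

import numpy as np

def reflexive_inverse(cols, eps=1e-6):
    """cols: the m by n matrix A given as a sequence of its n columns;
       returns B as an n by m array with BAB = B, ABA = A, and AB the
       orthogonal projection onto Im A (up to rounding)"""
    A = np.array(cols, dtype=float) # row k of this array is column k of A
    B = A.copy()
    for k, a in enumerate(A):
        v = B[k] # pivot row (a view into B)
        va = v @ a
        vunits = np.abs(v * a).max() # scale reference for the rank decision
        if va > eps * vunits:
            v /= va # scale on <a>
            for j in range(len(B)):
                if j != k:
                    B[j] -= (B[j] @ a) * v # elimination on <a>
        else:
            v[:] = 0.0 # numerically collapse to true zero
    return B
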
# the following routine requires you to keep track of whether matrices are
# represented as a list of row vectors or a list of column vectors

def invert(A, MatrixDecoration=lambda m: m, epsilon=1e-6):
    """from A, an m by n matrix of columns,
       construct B, an n by m matrix of rows, by elimination
       B is a reflexive generalized inverse of A
       where AB is an orthogonal projection onto Im A"""

    # * as a special case if rank A = n then BA = I and B is the
    #   Moore-Penrose Generalized Inverse which is unique
    # * if rank A < n then this won't in general be the Moore-Penrose inverse
    #   * nonetheless if given y and the regression model y^ = A b
    #     then b = B y will be such that ||y - y^|| is minimized
    #     * one of many equally good least-squares estimates for b

    # note the similarity to Gram-Schmidt Orthogonalization
    # * the pivot row (v below), before scaling and later eliminations,
    #   is identical to that produced by G-S Orthogonalization
    # * in particular the following invariant
    #   * inner product (v * a) == (v * v) where v is the pivot row
    #   * in python expressions:
    #       sum(v_*a_ for v_,a_ in zip(v,a)) == sum(v_*v_ for v_ in v)
    #   * follows from the elimination considered as a projection on the
    #     subspace orthogonal to all previously dealt with columns
    #   * Note that this invariant applied to the second pivot is
    #     the Cauchy-Schwarz inequality

    # note the radical simplicity of the algorithm
    # * pivot rows are predetermined
    # * the only decisions made are whether to consider a pivot row as
    #   linearly independent or dependent of previous rows
    #   * a decision of rank
    #   * whether a nonnegative numerical computation is positive or zero

    B = MatrixDecoration([list(a) for a in A]) # deep copy of A

    for k,a in enumerate(A):
        v = B[k] # pivot row

        va_s = [v_*a_ for v_,a_ in zip(v,a)]
        va = sum(va_s)
        vunits = max(abs(va_) for va_ in va_s) # scale reference for the rank decision

        if va > epsilon*vunits:
            v[:] = [1.0/va*v_ for v_ in v] # scale on <a>

            for j,b in enumerate(B):
                if j == k: continue

                ba = sum(b_*a_ for b_,a_ in zip(b,a))
                b[:] = [b_ - ba*v_ for b_,v_ in zip(b,v)] # elimination on <a>

        else:
            v[:] = [0 for v_ in v] # numerically collapse to true zero

    return B
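
# a quick usage sketch; the test matrix and the checks are my own
# illustration, with numpy assumed only for the verification
# * the third column below is the sum of the first two, so rank A = 2
#   and the third row of B collapses to zero

import numpy as np

A_cols = [[1.0, 0.0, 1.0],
          [0.0, 2.0, 0.0],
          [1.0, 2.0, 1.0]] # third column = first + second, so rank A = 2

B_rows = invert(A_cols)

A = np.array(A_cols).T # m by n, with the columns as listed above
B = np.array(B_rows)   # n by m, rows as returned

assert np.allclose(A @ B @ A, A)     # ABA = A
assert np.allclose(B @ A @ B, B)     # BAB = B
assert np.allclose(A @ B, (A @ B).T) # AB symmetric: orthogonal projection

# least-squares property: b = B y minimizes ||y - A b||, equivalently the
# residual y - A b is orthogonal to Im A
y = np.array([1.0, 2.0, 3.0])
b = B @ y
assert np.allclose(A.T @ (y - A @ b), 0.0)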