It's been a long time since I worked on this kind of problem, so please bear with me.
I have an approximate inverse matrix that I'm using as a preconditioner for the conjugate gradient method. Unfortunately the matrix is much denser than my available storage allows, and reducing it to a much sparser matrix gives a far poorer preconditioner.
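For context, here is a minimal sketch of how an approximate inverse enters preconditioned CG in SciPy. The system matrix and the Jacobi (inverse-diagonal) preconditioner below are placeholders, not the actual problem; the point is that CG only ever needs the *action* v -> M @ v of the approximate inverse, so the preconditioner can live in any compressed representation:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n = 8
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # stand-in SPD system matrix
b = rng.standard_normal(n)

d = np.diag(A).copy()              # stand-in approximate inverse: Jacobi (1 / diagonal)
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = cg(A, b, M=M)            # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```

Because `M` is only a `LinearOperator`, swapping in a wavelet-compressed approximate inverse means nothing more than changing the `matvec` callback.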
Here is a subset of one of the matrices being produced, to give an idea of the arrangement and layout of the data:
10.880 1.917 1.979 0.669 0.311 0.000 0.000 0.000
1.917 10.871 0.651 1.924 0.000 0.000 0.000 0.000
1.979 0.651 11.596 2.159 2.038 0.330 0.000 0.000
0.669 1.924 2.159 11.253 0.350 0.000 0.000 0.000
0.311 0.000 2.038 0.350 11.221 1.972 0.320 0.311
0.000 0.000 0.330 0.000 1.972 11.171 1.862 1.810
0.000 0.000 0.000 0.000 0.320 1.862 10.550 0.310
0.000 0.000 0.000 0.000 0.311 1.810 0.310 10.550
0.322 0.059 2.062 0.694 0.673 0.057 0.000 0.000
0.112 0.331 0.736 1.990 0.123 0.011 0.000 0.000
0.000 0.000 0.696 0.063 1.934 0.321 0.000 0.000
0.000 0.000 0.239 0.342 0.343 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Now it looks to me that this kind of matrix could be fairly well approximated in Fourier space, and that by using a wavelet transform I could retain much more of the data without resorting to a sparse matrix format where I simply cull every entry below a certain threshold.
Does that sound reasonable, looking at the example dataset? If so, how would I go about it, what existing techniques are involved, and what resources should I look at to become familiar with this area?
I'm interested in using the least amount of storage space to store the best approximation to these matrices.
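To make the question concrete, here is a sketch of the wavelet idea applied to the 8x8 block from the example above, using a hand-rolled orthonormal Haar transform (NumPy only; a real implementation would likely use a wavelet library and smoother wavelets). It transforms the matrix, keeps the k largest-magnitude coefficients, inverts, and compares against culling the k largest entries of the matrix directly. The budget k = 24 is an arbitrary illustration:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # detail rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

# The 8x8 block from the example dataset.
A = np.array([
    [10.880, 1.917, 1.979, 0.669, 0.311, 0.000, 0.000, 0.000],
    [ 1.917,10.871, 0.651, 1.924, 0.000, 0.000, 0.000, 0.000],
    [ 1.979, 0.651,11.596, 2.159, 2.038, 0.330, 0.000, 0.000],
    [ 0.669, 1.924, 2.159,11.253, 0.350, 0.000, 0.000, 0.000],
    [ 0.311, 0.000, 2.038, 0.350,11.221, 1.972, 0.320, 0.311],
    [ 0.000, 0.000, 0.330, 0.000, 1.972,11.171, 1.862, 1.810],
    [ 0.000, 0.000, 0.000, 0.000, 0.320, 1.862,10.550, 0.310],
    [ 0.000, 0.000, 0.000, 0.000, 0.311, 1.810, 0.310,10.550],
])

H = haar_matrix(8)
W = H @ A @ H.T                          # 2D Haar transform of the matrix

k = 24                                   # storage budget: keep 24 of 64 values
cut = np.sort(np.abs(W).ravel())[-k]
Wk = np.where(np.abs(W) >= cut, W, 0.0)  # keep only the largest coefficients
A_wavelet = H.T @ Wk @ H                 # reconstruction from kept coefficients

cut2 = np.sort(np.abs(A).ravel())[-k]
A_culled = np.where(np.abs(A) >= cut2, A, 0.0)  # plain value thresholding

err_wavelet = np.linalg.norm(A - A_wavelet)
err_culled = np.linalg.norm(A - A_culled)
```

One useful property: because the Haar transform is orthogonal, the Frobenius error of the reconstruction is exactly the norm of the discarded coefficients, so the compression wins precisely when the matrix's energy concentrates in few wavelet coefficients. Whether that beats plain culling depends on the matrix; for production use a library such as PyWavelets with Daubechies wavelets would be the natural next step.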