hplll is an experimental C++ library, companion to fplll.

Download: newest master (master.zip), GitHub releases, or older versions.

Problems and bug reports may be sent to gilles.villard@ens-lyon.fr

The current master branch is developed on macOS 10.12 with the compilers:

- clang++-mp-5.0 5.0.2 (seems to give better performance by default)
- MacPorts gcc8 8.2.0_3

```
./configure
./configure --enable-omp   # other options: see below
```

- GNU MP (6.1.2): http://gmplib.org

- MPFR (4.0.1): http://www.mpfr.org

- FPLLL (5.2.1, or master branch): https://github.com/dstehle/fplll

**Optional (highly recommended) library for fplll and hplll, searched by default**:

- QD (2.3.17): http://crd-legacy.lbl.gov/~dhbailey/mpdist, see fplll options.

**Optional library for hplll**:

- NTL (used with 10.5.0): http://www.shoup.net/ntl

```
./autogen.sh
./configure
make
make install
```

**OpenMP option**:

```
./configure --enable-omp
```

If GMP, MPFR, QD or fplll are installed under non-standard directories, you can specify their locations by calling configure with the options --with-gmp, --with-mpfr, --with-qd and --with-fplll. For NTL (optional), you can use --with-ntl. By default, make install installs files under /usr/local; you can specify another installation directory by giving ./configure the option --prefix=dirname. Run ./configure --help for further details.

**128-bit floating point**:

Tested by default. See the Shi Bai branch in fplll.

Another example:

```
./configure --with-gmp=/local/gmpdir/ --with-fplll=/fpllldir --prefix=$HOME/soft
make
make install
```

```
make check
```

An additional check, not included by default, is a bit more time consuming:

```
cd tests
make relationstesthuge
./relationstesthuge
```

```cpp
#include <hplll.h>

using namespace hplll;
```

The interfaces and data types Z_NR<.> for integers and FP_NR<.> for floating-point numbers are inherited from fplll, and can be used with several ground types:

```
Z_NR<long>, Z_NR<mpz_t>, Z_NR<__int128_t> (draft, not in fplll)

FP_NR<double>, FP_NR<long double>, FP_NR<dpe_t>, FP_NR<ldpe_t> (draft, not in fplll), FP_NR<mpfr_t>
```

The fplll class ZZ_mat is used for lattice bases.

The classes

```cpp
template<class T> class matrix
template<class T, class DPET> class MatrixPE
```

are used internally in hplll codes, and as template parameters for lattice classes.

**Nota bene.** fplll lattice bases are formed by the rows of matrices; for the moment, in hplll
the bases are formed by the **columns of matrices**. Transposition is available, for example:

```cpp
ZZ_mat<long> A, B;
A.resize(d, n);
B.resize(n, d);
transpose(B, A);   // B := A^T
```

The fplll utility latticegen and associated library functions can be used for generating special lattice bases.

The function

```cpp
void command_line_basis(ZZ_mat<mpz_t>& A, int& n, int& d, double& delta, int argc, char* argv[])
```

can be called in a program for constructing special bases at run time:

```cpp
ZZ_mat<mpz_t> A;
int n, d;
double delta;

command_line_basis(A, n, d, delta, argc, argv);   // Constructs A using the main parameters
```

Possibilities include:

```
main -type r -d 40 -bits 244    // Knapsack (d+1) x d matrix
main -type t -d 40 -alpha 1.4   // Ajtai d x d matrix
main -type al -d 240            // Seysen tests d x d bases
...
```

After installation, you may try to compile the file "gen.cc" and run "gen" in the directory examples using:

```
make -f make_ex gen
gen -type r -d 8 -bits 40
gen -type al -d 20
```

Note that you may have to modify the makefile "make_ex" according to your software installation (especially for non-standard, non-root installations) so that the include files and libraries are found.

A lattice is constructed using the class

template<class ZT, class FT, class MatrixZT, class MatrixFT> class Lattice

One of the constructors is:

```cpp
Lattice(ZZ_mat<ZT> A, bool forU = NO_TRANSFORM, int reduction_method = DEF_REDUCTION, int long_flag = WITH_LONG)

Lattice<.> B(A);

forU = NO_TRANSFORM or TRANSFORM                       // Possibly computes the unimodular transformation matrix
reduction_method = DEF_REDUCTION or SEYSEN_REDUCTION
long_flag = WITH_LONG or NO_LONG                       // Heuristic optimization during size reduction
```

Possibilities for the construction include:

```cpp
// mpz_t, or any integer type such as long or __int128_t
ZZ_mat<mpz_t> A;

// With dense matrices of floating-point numbers
Lattice<mpz_t, double, matrix<Z_NR<mpz_t> >, matrix<FP_NR<double> > > B(A);
Lattice<mpz_t, long double, matrix<Z_NR<mpz_t> >, matrix<FP_NR<long double> > > B(A);

// With column dpe_t matrices à la fplll (floating-point vectors and a common exponent)
Lattice<mpz_t, dpe_t, matrix<Z_NR<mpz_t> >, MatrixPE<double, dpe_t> > B(A);
```

```cpp
int hlll(double delta);   Ex: B.hlll(0.99);    // LLL reduction
ZZ_mat<ZT> getbase();     Ex: B.getbase();     // Returns the current state of the basis
ZZ_mat<ZT> getU();        Ex: B.getU();        // Returns the transform if known
```

You may try to compile the file "gcd.cc" and run "gcd" in the directory examples using:

```
make -f make_ex gcd
gcd
```

Note that you may have to modify the makefile "make_ex" according to your software installation (especially for non-standard, non-root installations) so that the include files and libraries are found.

Since Householder orthogonalization in particular induces a higher overhead than the Cholesky-based approach, fplll is recommended and should be faster for bases of small and medium dimensions:

| Knapsack, i7-3840QM CPU @ 2.80GHz | d=8, 2000 bits | d=8, 2·10^5 bits | d=40, 1600 bits | d=100, 10^4 bits | d=128, 10^4 bits | d=140, 10^4 bits | d=160, 1.6·10^4 bits |
|---|---|---|---|---|---|---|---|
| FPLLL wrapper | 4.5 ms | 3 s | 0.37 s | 56 s | 137 s | 197 s | 10.6 mn |
| HPLLL | 9.6 ms | 3.7 s | 0.54 s | 59 s | 138 s | 197 s | 8.8 mn |

| Ajtai, i7-3840QM CPU @ 2.80GHz | d=8, 2.0 | d=8, 4.0 | d=20, 2.8 | d=60, 1.4 | d=100, 0.8 |
|---|---|---|---|---|---|
| FPLLL wrapper | 1 ms | 1.2 s | 15.4 s | 11 s | 3.2 s |
| HPLLL | 2 ms | 1.6 s | 18.4 s | 12 s | 3.6 s |

hplll reduction may be faster than the fplll one as soon as the dimension is greater than about 160-170. Indeed, for any given floating-point precision used during the reduction, if fplll can reduce bases of dimension as large as d, then hplll should be able to reduce bases of dimension at least 2d. When the dimension increases, the fplll wrapper gradually uses more precision.

| Knapsack, i7-3840QM CPU @ 2.80GHz | d=200, 20000 bits | d=240, 24000 bits | d=280, 28000 bits |
|---|---|---|---|
| FPLLL wrapper | 27 mn | 1h17 | ... |
| HPLLL | 20 mn | 52 mn | 2h |

Higher dimensions may however be reached with low-precision floating-point numbers if the numerical quality (especially the condition number) is better than in the worst case.

| NTRU, i7-3835QM CPU @ 2.40GHz | n=227, d=454 | n=421, d=842 | n=613, d=1226 | n=1123, d=2246 | n=1301, d=2602 |
|---|---|---|---|---|---|
| FPLLL wrapper | 92 s | 176 s | 778 s | 50 mn | 1h47 |
| HPLLL | 36 s | 104 s | 408 s | 26 mn | 38 mn |

The algorithms look for an integer relation between real (floating-point) numbers given by a wrapped mpfr vector, with an a priori given precision alpha:

```cpp
vector<FP_NR<mpfr_t> > fpv;

mpfr_set_default_prec(alpha);
fpv.resize(d);
```

A class of tuples may then be used for accessing the relation computation:

```cpp
template<class ZT, class FT, class MatrixFT> class FPTuple
```

for example with the constructor

```cpp
FPTuple(vector<FP_NR<mpfr_t> > fpv);
```

For constructing a class L of tuples, possibilities include:

```cpp
FPTuple<long, double, matrix<FP_NR<double> > > L(fpv);
FPTuple<long, long double, matrix<FP_NR<long double> > > L(fpv);
FPTuple<__int128_t, long double, matrix<FP_NR<long double> > > L(fpv);
FPTuple<mpz_t, dpe_t, MatrixPE<double, dpe_t> > L(fpv);
```

The algorithms will in general use a lifting strategy: large integers or high-precision floating-point numbers are truncated, and LLL is applied recursively on bases with smaller entries (lifting).

-- Integer template type ZT: LLL will run on Z_NR<ZT> bases

-- Floating-point template type FT: LLL will use FP_NR<FT> floating-point numbers

-- Floating-point matrix template type MatrixFT: associated matrix type:

```
<ZT, FT, MatrixFT>

<ZT, double, matrix<FP_NR<double> > >    // HPLLL method: no column + exponent
<ZT, dpe_t, MatrixPE<double, dpe_t> >    // HPLLL method: with column + exponent
```

Template parameter types must be chosen according to the dimension of the problem (see the LLL required precision), to the size of the targeted relation, and to truncation choices when calling the relation computation.

```cpp
int relation(ZZ_mat<mpz_t>& C, long alpha,
             long confidence_gap = 60, long shift = 10, int truncate = -1,
             int lllmethod = FPLLL, int sizemethod = DEF_REDUCTION, double delta = 0.99);
```

-- Relation(s) will be returned in "C",

-- The "confidence_gap" is a number of bits corresponding to the chance of having found a relation,

-- The "shift" gives the lifting step in bits,

-- The truncation to integers keeps "truncate" bits,

-- LLL is computed using the "lllmethod": either FPLLL or HLLL,

-- "sizemethod" (with HLLL) is either DEF_REDUCTION or SEYSEN_REDUCTION,

-- "delta" is the LLL reduction strength parameter.

Files given for the examples or in the directory tests/collection/relations give the input mpfr precision (in bits), the number of floating-point numbers, and then list the numbers themselves.

You may try to compile the file "relation.cc" in the directory examples using:

```
make -f make_ex relation
```

and run

```
relation < alpha.in
```

The 8 numbers a, b, ... in the file alpha.in satisfy 8a + 7b + ... + 2f + h ≈ 0.

Note that you may have to modify the makefile "make_ex" according to your software installation (especially for non-standard, non-root installations) so that the include files and libraries are found.

Credit to David H. Bailey, Jonathan M. Borwein, Jason Kimberley and Watson Ladd, "Computer discovery and analysis of large Poisson polynomials", Experimental Mathematics, 27 Aug 2016, vol. 26, pp. 349-363. See also the PSLQ timings therein.

With:

```cpp
FPTuple<long, double, matrix<FP_NR<double> > > L(fpv);
L.relation(C, alpha, 30, 20, 40, FPLLL);
```

| Poisson polynomials, i7-3840QM CPU @ 2.80GHz | s=17, d=65 | s=19, d=91 | s=21, d=97 | s=23, d=133 |
|---|---|---|---|---|
| HPLLL | 4.53 s | 23.6 s | 31.4 s | 154 s |
| Bits used (floating-point precision) | 6200 | 12220 | 13880 | 26300 |

With:

```cpp
FPTuple<__int128_t, double, matrix<FP_NR<double> > > L(fpv);
L.relation(C, alpha, 20, 10, 40, HLLL);
```

| Poisson polynomials, Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz | s=31, d=241 | s=52, d=289 | s=37, d=325 |
|---|---|---|---|
| HPLLL | 2h20 | 5h54 | 25 h |
| FPLLL wrapper (direct LLL, not incremental) | 12h05 | 43h55 | - |
| Bits used (floating-point precision) | 87530 | 126010 | 159820 |

Faster, but not yet fully reliable. See relationstest.cc, relationstesthuge.cc, and:

```cpp
template<class ZT, class FT, class MatrixFT> class FPTuple_f

int relation(ZZ_mat<mpz_t>& C, long alpha,
             long confidence_gap = 60, long shift = 200, long increment = 20,
             int lllmethod = FPLLL, int sizemethod = DEF_REDUCTION, double delta = 0.99);
```

- No wrapper method yet (almost ready in wrapper.h).
- Only heuristic breaks are implemented in case of a lack of floating-point precision: this could lead to infinite loops in rare cases.