But what makes these instances "easy" and "hard"? A convenient answer is "it depends on the solver algorithm", but hardness appears to be a more universal notion.
For example, there is a well-known phase transition for randomly generated 3-SAT instances: suppose we generate k clauses over n variables, each clause consisting of three literals. Now,
- if k is less than roughly 4.27n, then the generated formula is underconstrained and satisfiable with high probability -- and most solvers find a satisfying assignment quickly;
- if k is greater than roughly 4.27n, then the generated formula is overconstrained and unsatisfiable with high probability, and most solvers can derive a contradiction quickly;
- however, if k is around 4.27n, then deciding whether the formula is satisfiable is hard for all the solvers available (which is why benchmarks and testbeds are usually generated exactly at this ratio).
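This phase transition is easy to observe experimentally. Below is a minimal sketch (my own illustration, not from any particular solver): it generates random 3-SAT instances at a given clause-to-variable ratio and checks satisfiability by brute force, which is fine for small n. Far below the threshold almost every instance is satisfiable; far above it, almost none are.

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Generate a random 3-SAT instance: each clause picks 3 distinct
    variables and negates each literal with probability 1/2."""
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def is_satisfiable(n_vars, clauses):
    """Brute-force satisfiability check: try all 2^n assignments.
    Only viable for small n -- this is for illustration, not a real solver."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 12, 20
# Underconstrained regime (k = 2n, well below the threshold): almost always SAT.
sat_frac_low = sum(is_satisfiable(n, random_3sat(n, 2 * n, rng))
                   for _ in range(trials)) / trials
# Overconstrained regime (k = 8n, well above the threshold): almost always UNSAT.
sat_frac_high = sum(is_satisfiable(n, random_3sat(n, 8 * n, rng))
                    for _ in range(trials)) / trials
print(sat_frac_low, sat_frac_high)
```

Running this, the fraction of satisfiable instances at ratio 2 comes out near 1, and near 0 at ratio 8; sweeping the ratio between those extremes traces out the sharp transition.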
Research in this direction can lead to a better understanding of the structure of NP-hard problems.
Thus it's probably very, very hard.
Any willing BSc / MSc student there? ^_^