600 On The 3Sat

Boolean satisfiability is a classic problem in computer science. You are given n boolean variables A, B, C, ... and a formula in 3-conjunctive normal form, such as

((¬ A ∨ B ∨ C) ∧ (D ∨ ¬ C ∨ E) ...)

This formula reads as: both "not A or B or C" and "D or not C or E". The goal is to find values for the variables that make the formula true. In the toy example above, if both B and D are true, any assignment works for A, C, and E. The Cook-Levin theorem shows that this problem is so hard that an efficient general solution to it would also solve a host of other "NP-complete" problems. Knowing how hard the problem is in general makes it even more shocking that practical 3SAT solvers exist that work on instances with hundreds, if not thousands, of variables.
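To make this concrete, here is the toy formula in the (variable, is_true) clause representation that the benchmark code in this post uses, along with a quick check that the "B and D true" assignment satisfies it (the variable numbering is my own):

```python
# Toy 3-CNF formula: (not A or B or C) and (D or not C or E).
# A clause is a tuple of (variable, is_true) literals; a literal is
# satisfied when the variable's assigned value equals is_true.
A, B, C, D, E = range(5)
formula = [
    ((A, False), (B, True), (C, True)),
    ((D, True), (C, False), (E, True)),
]

def evaluate(cnf, variables):
    # The formula is true when every clause has at least one satisfied literal.
    return all(any(variables[name] == val for name, val in clause) for clause in cnf)

# With B and D true, the formula holds for every choice of A, C, and E.
assignment = {A: False, B: True, C: False, D: True, E: False}
print(evaluate(formula, assignment))  # True
```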

Through focused, diligent research, CDCL (conflict-driven clause learning) methods have become the state of the art for solving 3SAT, and it is truly marvelous that humans even have a shot at understanding this problem. However, every once in a while an armchair mathematician comes along who seems to believe they can do better: P = NP, and their 20-line algorithm proves it.
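A full CDCL solver is well beyond the scope of this post, but the inference rule at its core, unit propagation, is easy to sketch. Here is a toy illustration (not the real thing), using the same (variable, is_true) literal encoding as the benchmark code below:

```python
def unit_propagate(clauses, assignment):
    # Repeatedly assign variables forced by unit clauses: clauses where
    # every literal but one is falsified. Returns the extended assignment,
    # or None on a conflict (a clause with every literal falsified).
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(var) == val for var, val in clause):
                continue  # clause already satisfied
            unassigned = [lit for lit in clause if lit[0] not in assignment]
            if not unassigned:
                return None  # conflict: every literal is false
            if len(unassigned) == 1:
                var, val = unassigned[0]
                assignment[var] = val  # forced assignment
                changed = True
    return assignment

# (not A or B) and (not B or C): setting A = True forces B = True, then C = True.
clauses = [((0, False), (1, True)), ((1, False), (2, True))]
print(unit_propagate(clauses, {0: True}))  # {0: True, 1: True, 2: True}
```

CDCL interleaves this propagation with guessing, and, crucially, learns a new clause from every conflict so the same dead end is never explored twice.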

600 On the 3Sat is a facetious exploration of these less-than-theoretically-motivated approaches, and an analysis of how badly they fare compared to a state-of-the-art conflict-driven clause learning solver. (repo)

The code used to display the table is adapted and reproduced, under an MIT License, from this codepen.

We tried a number of optimization strategies. For each value of n we ran a few repeats of each algorithm on a few different random instances of 3SAT with n variables and 4.4 * n clauses (a ratio chosen within the "critical region" for random 3SAT). Each clause consisted of 3 random variables, with a .5 chance of negating each variable. The column marked with a strategy's name is the average response of that strategy (1 = satisfiable, 0 = unsatisfiable), strategy_correct is the fraction of those responses that are correct, and strategy_time is the time it took to receive the answer. Strategies that ran too slowly were cut off from future trials, so most of this chart has no data in it.

Some of the results were shocking; some, not so much. First, we have an "empirical proof" that P != NP, so I accept wire transfers or mailed checks for the Millennium Prize. In lieu of cash, I accept proofs of other Millennium Prize problems \s. But what's really interesting is that open source integer linear programming methods are indeed outperformed by specialized SAT solvers. The specialized methods could solve problems of size n < 100 in under 1/20 of a second, while ILP started timing out well before that. Schöning's algorithm appeared to outperform every hand-rolled algorithm, including the "improvement" suggested by the arxiv paper, my own manual gradient-descent-type approach to SAT, black box hyperparameter tuning methods, and black box function optimization from scipy. Even worse, for the problem sizes that any of these in-house methods could solve, simply brute forcing the O(2^n) inputs was faster.

The lessons here are three-fold. Don't reinvent the wheel. There is no "strong evidence" for the correctness of an algorithm without either a certified, peer-reviewed proof or a real implementation of the algorithm. Lastly, if you do feel the need to reinvent the wheel, don't get too creative.

Time as a function of the number of variables for each strategy, until the timeout was reached.

Fraction of instances solved by each method

CDCL is a complete and sound method, so the canonical solver's line also gives the number of satisfiable instances.

import random
import pandas as pd
import collections
import time
import pysmt
from pysmt.shortcuts import Symbol, LE, GE, Int, And, Equals, Plus, Solver, Or, Iff, Bool, get_model
from pysmt.typing import INT
from mip import *
from functools import reduce
from itertools import combinations
from operator import mul
from scipy.optimize import minimize
from hyperopt import fmin, tpe, space_eval, hp

critical_ratio = 4.4
MIN_N = 3
MAX_N = 1000
REPEATS = 3  # repeats per value of n (value assumed; not defined in the original)
TIMEOUT = 5  # seconds before a strategy is cut off from future trials (value assumed)

# https://www.cs.ubc.ca/~hoos/SATLIB/Benchmarks/SAT/RND3SAT/descr.html#:~:text=One%20particularly%20interesting%20property%20of%20uniform%20Random-3-SAT%20is,systematically%20increasing%20%28or%20decreasing%29%20the%20number%20of%20kclauses
# We vigorously handwave the phase transition for 3sat

# Benchmark various free ways to solve 3SAT

def create_random_ksat(num_variables, num_clauses, k = 3):
    """Return a random ksat instance with num_variables variables and num_clauses clauses.

    :param num_variables:
    :param num_clauses:
    :param k: k in ksat
    :return: A list of k-tuples of literals. Each literal is a pair of an integer (the name
    of the variable) and a boolean for whether or not it is negated. This is a 3SAT
    instance in CNF when k = 3.
    """
    def valid(clause):
        return len(set(var for var, _ in clause)) == len(clause)
    def create_clause():
        while True:
            clause = tuple((random.choice(range(num_variables)), random.random() < .5) for i in range(k))
            if valid(clause):
                return clause

    clauses = set()
    while len(clauses) < num_clauses:
        new_clause = create_clause()
        while new_clause in clauses:
            new_clause = create_clause()
        clauses.add(new_clause)

    return list(clauses)

def evaluate(cnf, variables):
    return all(any(variables[name] == val for name, val in clause) for clause in cnf)

def get_num_symbols(sat_instance):
    return max(max(tup[0] for tup in clause) for clause in sat_instance) + 1

def canonical_solver(sat_instance):
    """Reference solver. Assumed complete and sound.

    :param sat_instance:
    """
    num_symbols = get_num_symbols(sat_instance)
    symbols = [Symbol(str(i)) for i in range(num_symbols)]
    domains = [Or([Iff(Bool(is_true), symbols[variable]) for variable, is_true in clause]) for clause in sat_instance]

    formula = And(domains)

    model = get_model(formula)
    if model:
        return True
    return False

def assignment_from_num(i, num):
    return [bool((i >> index) & 1) for index in range(num)]

def nonconvex_local(sat_instance):
    n = get_num_symbols(sat_instance)

    def cost(x):
        # Number-of-falsified-clauses relaxation: each clause contributes its
        # cheapest literal's distance from being satisfied.
        return sum(
            min(int(1 - x[variable]) if is_true else int(x[variable]) for variable, is_true in clause)
            for clause in sat_instance)

    results = []

    for i in range(10):
        start = [int(random.random() < .5) for i in range(n)]
        result = minimize(cost, start, bounds = [(0, 1) for i in range(n)])

        guessed_output = [int(a >= 0.5) for a in result.x]

        results.append(evaluate(sat_instance, guessed_output))

    return any(results)

def hyperopt(sat_instance):
    n = get_num_symbols(sat_instance)
    def cost(x):
        return sum(
            min(int(1 - x[variable]) if is_true else int(x[variable]) for variable, is_true in clause)
            for clause in sat_instance)
    c = 2.1
    best = fmin(fn=cost,
                space=[hp.randint('x' + str(i), 0, 2) for i in range(n)],
                max_evals = 2 * int(n ** c))

    return cost(list(best.values())) < 1

def brute_force(sat_instance):
    """Solve in exponential time. For fun. O(c * (2 ** n))

    :param sat_instance:
    """
    def all_instances(num):
        for i in range(2 ** num):
            yield assignment_from_num(i, num)
    return any(evaluate(sat_instance, s) for s in all_instances(get_num_symbols(sat_instance)))

def do_benchmark() -> pd.DataFrame:
    solution_strategies = {"canonical":canonical_solver, "ilp": do_cbc_solver, "schonig": schonig,
                           "crank_algorithm": crank_algorithm, "local_sat": local_sat,
                           "brute_force": brute_force, "nonconvex_local": nonconvex_local, "hyperopt": hyperopt}
    hit_cutoffs = set()
    ns = [int(a / REPEATS) for a in range(MIN_N * REPEATS, MAX_N * REPEATS, 1)]
    cols = collections.defaultdict(list)

    for n in ns:
        new_row = dict()
        new_row["n"] = n
        instance = create_random_ksat(n, int(n * critical_ratio))
        for solution_name, solution in solution_strategies.items():
            start = time.time()
            new_row[solution_name] = solution(instance) if solution_name not in hit_cutoffs else False
            end = time.time()
            new_time = end - start
            new_row[solution_name + "_time"] = new_time
            if new_time > TIMEOUT:
                hit_cutoffs.add(solution_name)
        right_solution = new_row["canonical"]

        for solution_name in solution_strategies:
            if solution_name in hit_cutoffs:
                new_row[solution_name + "_correct"] = False
            else:
                new_row[solution_name + "_correct"] = right_solution == new_row[solution_name]

        for key in new_row:
            cols[key].append(new_row[key])

    return pd.DataFrame(cols)

def do_cbc_solver(sat_instance):
    n = get_num_symbols(sat_instance)

    m = Model("knapsack", solver_name = CBC)

    x = [m.add_var(var_type=BINARY) for i in range(n)]

    for clause in sat_instance:
        m += xsum(x[var] if is_true else 1 - x[var] for var, is_true in clause) >= 1

    status = m.optimize()

    return status == OptimizationStatus.OPTIMAL or status == OptimizationStatus.FEASIBLE

def schonig(sat_instance):
    """Schöning's random-walk algorithm.

    :param sat_instance:
    """
    n = get_num_symbols(sat_instance)

    def attempt_greedy_walk():
        randomized_assignment = [random.random() < .5 for i in range(n)]
        c = len(sat_instance)
        for i in range(5 * c):
            evaluation = evaluate(sat_instance, randomized_assignment)
            if evaluation:
                return True
            for clause in sat_instance:
                if not evaluate([clause], randomized_assignment):
                    var, _ = random.choice(clause)
                    randomized_assignment[var] = not randomized_assignment[var]

        return False

    return any(attempt_greedy_walk() for i in range(10))

def local_sat(sat_instance):
    """Gradient-descent-esque SAT with some simulated annealing. Should be worse than
    Schöning's algorithm but better than the crank algorithm.

    :param sat_instance:
    """
    n = get_num_symbols(sat_instance)

    map = [0] * n

    for clause in sat_instance:
        for variable, is_true in clause:
            map[variable] += 1 - (2 * is_true)

    def attempt_greedy_walk():
        randomized_assignment = [random.random() < .5 for i in range(n)]
        c = len(sat_instance)
        for i in range(5 * c):
            evaluation = evaluate(sat_instance, randomized_assignment)
            if evaluation:
                return True
            for clause in sat_instance:
                if not evaluate([clause], randomized_assignment):
                    var, _ = max(clause, key = lambda tup: ((map[tup[0]] if not randomized_assignment[tup[0]] else -map[tup[0]]), random.random()))
                    randomized_assignment[var] = not randomized_assignment[var]

        return False

    return any(attempt_greedy_walk() for i in range(30))

def crank_algorithm(sat_instance):
    """If this works, the following author is a millionaire, and P = BPP.

    :param sat_instance:
    """

    n = get_num_symbols(sat_instance)
    # M is some free parameter less than n, lets fix arbitrarily
    M = n - 1
    # For some reason M is assumed to be even
    if M % 2:
        M = M - 1

    M = 4

    current_assignment = [int(M / 2) for i in range(n)]

    def evaluate_fractional_clause(clause, variables):

        k = len(clause)
        out = 0
        for subset in range(1, k + 1):
            mult = (-1) ** (subset + 1)
            for combo in combinations(range(k), subset):

                out += reduce(mul,(((variables[clause[i][0]]) if clause[i][1] else (M - (variables[clause[i][0]] / M))) for i in combo)) * mult / (M ** subset)

        return out

    def worst_clause_and_val():
        return min(((clause, evaluate_fractional_clause(clause, current_assignment)) for clause in sat_instance), key = lambda a: (a[1], random.random()))

    for i in range(20 * n * n * M * M):
        assert all(var <= M for var in current_assignment)
        worst_clause, worst_clause_truth_value = worst_clause_and_val()
        if worst_clause_truth_value == 1:
            return True
        random_var, _ = random.choice(worst_clause)
        increments = {0.0: [1], M: [-1]}
        increment_choice = random.choice(increments.get(current_assignment[random_var], [1, -1]))
        current_assignment[random_var] += increment_choice

    return False

benchmark_df = do_benchmark()

# print(do_cbc_solver(create_random_ksat(10, 100)))

benchmark_df.to_csv("data", index = False)
benchmark_df.groupby("n").mean().to_csv("data_grouped", index = True)

pd.set_option("display.max_rows", None, "display.max_columns", None, "display.width", 1000)

# print(benchmark_df)

Abstract Nonsense

"Abstract Nonsense" is a somewhat loving, but somewhat derisive term for methods (typically Category Theoretic methods) in pure mathematics that are unreasonably convoluted and involve a lot of theoretical machinery. I myself am awful at Category theory but excellent at abstract nonsense, and I wanted a space to share my thoughts and projects. I'm well aware that very few people will read this blog, but to me this space is a journal. A respite from the giants that control the web, and a space to share my thoughts into the void, in a way I can control and moderate.

More concretely, I hope to maintain "Abstract Nonsense" as a dev log of sorts. Not because I think it showcases phenomenal technical talent, but because it showcases some of the cool things I've been learning on the side.

I'll keep my first entry in this journal quite short. It stands well on its own, because it does something the category theorist in all of our hearts would love.

It's self referential.

The content engine that runs Abstract Nonsense is quite brilliant, if I do say so myself. It is a python script that takes in a series of html files and agglomerates them into a single file.
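In spirit, the engine does something like the following sketch (the function name and file layout here are made up for illustration; the real script has more templating):

```python
from pathlib import Path

def agglomerate(fragment_dir, out_file):
    # Read every HTML fragment and splice them into one page, newest first
    # (assuming fragment file names sort chronologically).
    fragments = sorted(Path(fragment_dir).glob("*.html"), reverse=True)
    body = "\n".join(fragment.read_text() for fragment in fragments)
    Path(out_file).write_text("<html>\n<body>\n" + body + "\n</body>\n</html>\n")
```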

In addition to the Abstract Nonsense engine, I have two other python scripts that form the backbone of this (static) website. One takes in the plaintext of a quote document I have been personally maintaining for the past 3 years; it uses regular expressions to parse out the quotes and builds an html file containing JavaScript that alters the html on the page to create a typing effect. Check it out here! The final piece of this beautiful infrastructure is a third script that runs both scripts and then commits the whole branch to master.
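The parsing step is roughly this (the exact line format of the quote document is an assumption on my part here):

```python
import re

# Assumed format, one quote per line:  "Quote text" - Author
QUOTE_RE = re.compile(r'^"(?P<quote>.+)"\s*-\s*(?P<author>.+)$')

def parse_quotes(text):
    # Pull (quote, author) pairs out of the raw quote document,
    # skipping any line that does not match the expected shape.
    quotes = []
    for line in text.splitlines():
        match = QUOTE_RE.match(line.strip())
        if match:
            quotes.append((match.group("quote"), match.group("author")))
    return quotes

doc = '"Everybody has a testing environment." - @stahnma\nnot a quote line\n'
print(parse_quotes(doc))  # [('Everybody has a testing environment.', '@stahnma')]
```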

As I learned on Twitter/Reddit/The Quote Document: "Everybody has a testing environment. Some people are lucky enough to have a totally separate environment to run production in." - @stahnma. Abstract Nonsense, and this website as a whole, is both test and prod. Maybe one day I'll be a good enough engineer to invest in a separate test and prod for my website.

GPT 9001/Abstract Nonsense

This "blog" is called "Abstract Nonsense" because of this project. Most language models try to build interesting output, but end up spouting abstract nonsense (with or without some semantic correctness). Well, I thought to myself, I have a corpus that itself is really just abstract nonsense, maybe I could train an NLP transformer model on this corpus, and oddities of syntax, would actually be a feature!

Because the robot is confused, it will also be named Abstract Nonsense, to maximize perplexity with respect to the identically named blog hosted on this site.

I present to you GPT 9001! It is really just a fine-tuned version of GPT-2, tuned for text generation on the Quote Doc. In this project I learned that hand-rolled models I can quickly train are trash. For example, the first implementation of GPT 9001 was called GPT0, and it was just an LSTM model I spun up and trained on the quote doc; it could either predict random words or overfit the training set. It couldn't do anything of interest :(.

Anyway, without further ado, here s/he is:

Here GPT 9001 is reflecting on the repetitive nature of training deep neural networks. Quite introspective, and certainly not just a chance occurrence! Click the image or this sentence to see more brilliancies from GPT 9001. Please understand that the writings on this page are those of an AI and, despite a good-faith set of filters, might be unsettling, mildly profane, or nonsense. GPT was pre-trained on a corpus of data, so a name appearing in its output does not necessarily mean I know that person. The model also predicted authors. These predictions are quite funny, especially if you know the people, but to avoid anyone mistaking satire for reality, I will not show the names. Update: new and improved model (GPT-3 based)

Pats for a good floofer!

This update is a quick one.

I learned that this floofer needed some head pats, and I had to help!

This is an important cause, so feel free to compile and run the following Java script (not JavaScript, fortunately) to help out the floofer.

Help the floofer with this script!
package none;

import java.awt.Color;
import java.awt.Desktop;
import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.net.URI;
import java.util.Calendar;
import java.util.Random;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * @author rohan
 */
public class PetFloofer {

	/**
	 * @param args the command line arguments
	 */
	public static void main(String[] args) throws Exception {
		int NUM_PETS = 25;
		int pause = 334;
		Robot robot = new Robot();
		int steps = 334;
		int startX = 780;
		int endX = 1150;
	    double stepSize = ((double) (endX - startX) / steps);
		for (int i = 0; i <= NUM_PETS - 1; i++) {
			for (int step_num = 0; step_num <= steps - 1; step_num++) {
				int x = (int) (startX + step_num * stepSize);
				robot.mouseMove(x, 400);
			}
			robot.delay(pause); // rest a moment between pets
		}
	}
}


Execution instructions
        javac PetFloofer.java && java PetFloofer.java
Download the source here!

pip install pandas

Pandas is a leading library in the field of data science. When you utter the magic words:

pip install pandas
You get a powerful library to explore, analyze, and transform massive data sets using expressive syntax.

You even get chills as you type the commands into your terminal:
Magical application
python my_magic_app.py

Your app runs, it does the big data, but still... something's missing... you can't put your finger on it...

You are missing ACTUAL pandas!! Wouldn't it be great if these magic spells also summoned the animals whose names you invoke? Introducing: pipinstallpandas.py! Whenever you invoke the name of an animal (or even mention it in any context) to do your magical programming work, this script will show you some pictures of the cuties to help you properly express your gratitude. Here are some example commands to try in your terminal. Typing other words will show you a surprise.
Supported commands

python pipinstallpandas.py # start the logger
conda install tensorflow
pip install pandas
python my_amazing_app.py >> output_log.txt
cat output_log.txt
Without further ado, here is the script:
from pynput.keyboard import Key, Listener
import time
import random

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

mutable_st = []

from subprocess import Popen, check_call

def check(mutable_st, key):
    l = len(key)
    return len(mutable_st) >= l and all(mutable_st[i - l] == key[i] for i in range(len(key)))

def open_then_close(url, time_to_sleep = 3):
    # Open the url in a fresh browser window, linger a moment, then close it.
    browser = webdriver.Chrome()
    browser.get(url)
    time.sleep(time_to_sleep)
    browser.quit()


def on_press(key):
    do_nothing_function = lambda : None

    def get_random_picture_of(thing):
        adjectives = ["cute", "cuddly", "floofy", "soft",
                      "adorable", "big", "aww", "safe",
                      "happy", "sad", "tame"]
        adjectives_to_use = []
        for adjective in adjectives:
            if random.random() < .5:
                adjectives_to_use.append(adjective)
        query_words = adjectives_to_use + [thing]
        query = "+".join(query_words)
        # The original elides the search URL; a Google image search is assumed here.
        open_then_close("https://www.google.com/search?tbm=isch&q=" + query)

    quit = lambda : exit(0)
    keywords = {"python": lambda : get_random_picture_of("pythons"),
                "conda": lambda : get_random_picture_of("cartoonish plush snake"),
                "pandas": lambda : get_random_picture_of("pandas"),
                "floof": lambda : get_random_picture_of("floofers"),
                "dog": lambda : get_random_picture_of("dog"),
                "cat": lambda : get_random_picture_of("cat"),
                "fuck": lambda : get_random_picture_of("great alaskan malamute"),
                "shit": lambda : get_random_picture_of("giant flemish rabbit"),
                "sad": lambda : get_random_picture_of("hug"),
                "leavelogger": quit}

    try:
        mutable_st.append(key.char)
    except AttributeError:
        pass  # special keys (shift, ctrl, ...) have no .char

    for keyword, function in keywords.items():
        if check(mutable_st, keyword):
            function()

with Listener(on_press=on_press) as listener:
    listener.join()

Download the script here. Formatting for code borrowed from this Code Pen. See it on Github.