Salesforce has released CTRL, a 1.6-billion-parameter conditional transformer language model that gives users more control over text generation. It is trained on control codes that specify domain, subdomain, entities, relationships between entities, dates, and task-specific behaviour. Because these codes are derived from structure that naturally occurs alongside raw text, the advantages of unsupervised learning are preserved without losing explicit user control.

The release currently supports two functions. The first is generation from a trained model: two models can be downloaded for this purpose, one with a sequence length of 256 tokens and the other with 512. Both use the same word-level vocabulary and can generate sequences longer than their training length through a sliding-window approach; a minimal generation example appears below. The second is source attribution: given a prompt, the tool reports the prompt's perplexity under each domain control code, indicating which training domains the prompt most resembles (see Section 5 in the paper); a sketch of that scoring follows the generation example.
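For readers who want to try generation, the sketch below uses the CTRL port in the Hugging Face Transformers library rather than the Salesforce release scripts; the `ctrl` checkpoint id, the `Links` control code in the prompt, and the repetition-penalty value are assumptions based on that integration, not part of the Salesforce announcement.

```python
# A minimal generation sketch using the Hugging Face Transformers port of CTRL.
import torch
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")
model.eval()

# Prompts begin with a control code; "Links" steers generation toward
# the web-links domain the model saw during training.
prompt = "Links An analysis of transformer language models"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=100,
        repetition_penalty=1.2,  # penalized sampling discourages repetition loops
        do_sample=False,
    )

print(tokenizer.decode(output[0]))
```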
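Source attribution can be sketched in the same setup: prepend each candidate domain code to the prompt and compare the resulting perplexities. The control-code list below is a small assumed subset, and the paper's exact scoring procedure may differ from this rough approximation.

```python
# An illustrative sketch of source attribution: score a prompt's perplexity
# under several domain control codes and see which domain fits best.
import math
import torch
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")
model.eval()

def perplexity_under_code(code: str, prompt: str) -> float:
    # Prepend the domain code, then measure how surprised the model is
    # by the tokens that follow it.
    ids = tokenizer(f"{code} {prompt}", return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return math.exp(loss.item())

prompt = "Global warming is a lie."
for code in ["Wikipedia", "Links", "Reviews", "Opinion"]:
    print(f"{code:10s} perplexity = {perplexity_under_code(code, prompt):.1f}")
```

A lower perplexity under a given code suggests the prompt looks more like text from that training domain, which is the intuition behind the source-attribution feature.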