When there are many world info entries, some inevitably get cut to fit the context size. Currently, the model shortens and then deletes the longest entries first, regardless of how important they are. We could add a priority field to these entries so that users could tell the AI which entries matter most and should be kept in the 2048-token context.
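
As a rough sketch of what priority-aware trimming might look like (the `priority` field and `fit_entries` helper are hypothetical, and a whitespace split stands in for the model's real tokenizer):

```python
def count_tokens(text: str) -> int:
    """Rough stand-in for the model tokenizer."""
    return len(text.split())

def fit_entries(entries: list[dict], budget: int = 2048) -> list[dict]:
    """Trim, then drop, low-priority entries until the total fits the budget."""
    # Visit lowest-priority entries first; break ties by length
    # (longest first), mirroring the current longest-first behavior.
    order = sorted(
        entries,
        key=lambda e: (e["priority"], -count_tokens(e["text"])),
    )
    kept = {id(e): dict(e) for e in entries}
    total = sum(count_tokens(e["text"]) for e in entries)
    for entry in order:
        if total <= budget:
            break
        e = kept[id(entry)]
        tokens = e["text"].split()
        overshoot = total - budget
        if overshoot < len(tokens):
            # Shorten: cut just enough from the end of this entry.
            e["text"] = " ".join(tokens[: len(tokens) - overshoot])
            total = budget
        else:
            # Delete: the whole entry has to go.
            total -= len(tokens)
            e["text"] = ""
    # Preserve original ordering; drop fully deleted entries.
    return [kept[id(e)] for e in entries if kept[id(e)]["text"]]
```

With this scheme, a user could mark their protagonist's entry `priority: 10` and minor lore `priority: 1`, and the lore would be shortened or dropped first when the context overflows.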