Making A Deep Dream Twitter Bot

Noah (ogoodness) · Posted on March 1, 2021


Let's get into it!

This will be a quick walkthrough of how I made a Twitter bot that grabs input from replies, generates an image from the text, and then passes that image through some trippy Deep Dream filters.

This tutorial assumes you are working inside of Google Colab.

GitHub: https://github.com/OGoodness
Twitter: https://twitter.com/NoahFields_
Bot Twitter: https://twitter.com/WeDeepDream

Steps:

  1. Getting Set Up Inside of Google Colab
  2. Creating API Keys
  3. Code
  4. PROFIT!

One of my favorite outputs:

Harmful Candy

Code!

First, we need to make sure Google Colab is using a compatible version of TensorFlow. You need to specify this before your imports:

%tensorflow_version 1.x

Imports

# Install Lucid and other Dependencies
!pip install --quiet lucid==0.0.5
!pip install --quiet requests "tweepy<4"  # tweepy 3.x exposes api.search, which the bot uses
%tensorflow_version 1.x
# Import libraries
from google.colab import files
import numpy as np
import tensorflow as tf
import scipy.ndimage as nd
import random, cv2, json, os, requests, string, sched, time
import tweepy  # Twitter client used by the bot functions below
from google.colab.patches import cv2_imshow
from PIL import Image

# Deep Dream Libraries
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
from lucid.misc.io import show, load
from lucid.misc.io.reading import read

Create Model

model = models.InceptionV1()
model.load_graphdef()
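If you want to see which layer names are available to target later (conv2d0, mixed4d, and so on), you can list the nodes of the graph you just loaded. This is just an exploratory sketch; newer lucid versions also expose a model.layers summary, but printing graph node names works regardless:

# Peek at the first few node names in the loaded graph to see what layers exist
for node in model.graph_def.node[:20]:
  print(node.name)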

Get List of Words

Since you probably won't have a lot of user interaction at first, we are going to pull our input from a wordlist!

word_site = "https://www.mit.edu/~ecprice/wordlist.10000"

response = requests.get(word_site)
WORDS = response.text.splitlines()  # use .text so we get strings, not bytes
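As a quick sanity check, we can pick two random words the same way the bot will later when it composes its prompt text:

# Preview the kind of text the bot will feed into the image generator
sample_text = random.choice(WORDS) + " " + random.choice(WORDS)
print(sample_text)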

Helper Functions

# Convert images to the model's expected size/format (W x W crop)
def imgToModelSize(arr):
  W = model.image_shape[0]
  w, h, _ = arr.shape
  s = float(W) / min(w, h)              # scale so the short side matches W
  arr = nd.zoom(arr, [s, s, 1], mode="nearest")
  w, h, _ = arr.shape
  dw, dh = (w-W)//2, (h-W)//3           # crop offsets
  return arr[dw:dw+W, dh:dh+W]

# Objective comparing activations of the source image (batch index 1)
# against the image being optimized (batch index 0)
@objectives.wrap_objective
def dot_compare(layer, batch=1, cossim_pow=0):
  def inner(T):
    dot = tf.reduce_sum(T(layer)[batch] * T(layer)[0])
    mag = tf.sqrt(tf.reduce_sum(T(layer)[0]**2))
    cossim = dot/(1e-6 + mag)
    return dot * cossim ** cossim_pow
  return inner

# Get a DeepAI-generated image URL from input text
def get_generated_image_url(input_text):
  headers = { "api-key": "DEEPAI API KEY" }
  url = "https://api.deepai.org/api/text2img"
  data = { "text": input_text.encode('utf-8') }
  r = requests.post(url, data=data, headers=headers)
  result = r.json()
  return result["output_url"]
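Here's a quick usage sketch, assuming you've pasted a valid DeepAI key into the headers above. The prompt string is just an example, and load is the same lucid helper used later to pull the generated image down as a numpy array:

# Hypothetical test run: turn a phrase into an image and fetch it
test_url = get_generated_image_url("harmful candy")
test_image = load(test_url)   # lucid's load() returns the image as a float array
print(test_image.shape)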

Twitter Interface

Functions to interact with Twitter. These are fairly simple; you just need to create a Twitter developer app and grab its API keys.

def get_tweet(url):
    tweet_id = url.split('/')[-1]
    api = get_api()
    tweet = api.get_status(tweet_id)
    return tweet

def get_api():
    # Authenticate to Twitter
    auth = tweepy.OAuthHandler("CONSUMER KEY", "CONSUMER SECRET")
    auth.set_access_token("TOKEN KEY", "TOKEN SECRET")
    # Create API object
    api = tweepy.API(auth, wait_on_rate_limit=True)
    return api

def get_replies(api, tweet):
    tweet_id = tweet.id
    user_name = tweet.user.screen_name
    max_id = None
    replies = []
    # Search for tweets directed at the bot that are newer than the original tweet
    cursor = tweepy.Cursor(api.search, q='to:{}'.format(user_name),
                                since_id=tweet_id, max_id=max_id, tweet_mode='extended').items()
    count = 0
    for reply in cursor:
      count += 1
      # Keep only direct replies to the tweet we care about
      if (reply.in_reply_to_status_id == tweet_id):
          replies.append(reply)
      max_id = reply.id
      if (count > 100):   # cap the search so we don't burn through rate limits
        break
    return replies
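Rather than pasting your keys straight into the notebook, I'd suggest reading them from environment variables. The variable names below are just my own convention, not anything Twitter or tweepy requires:

# Alternative to get_api(): pull credentials from environment variables
def get_api_from_env():
    auth = tweepy.OAuthHandler(os.environ["TW_CONSUMER_KEY"], os.environ["TW_CONSUMER_SECRET"])
    auth.set_access_token(os.environ["TW_TOKEN_KEY"], os.environ["TW_TOKEN_SECRET"])
    return tweepy.API(auth, wait_on_rate_limit=True)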

Image Processing

This is where the fancy part of the app comes in: this is where you actually process the image and convert it into a weird nightmare creature!

def feature_inversion(img=None, layer=None, n_steps=512, cossim_pow=0.0):
  with tf.Graph().as_default(), tf.Session() as sess:
    img = imgToModelSize(img)

    objective = objectives.Objective.sum([
        1.0 * dot_compare(layer, cossim_pow=cossim_pow),
        objectives.blur_input_each_step(),
    ])

    # Batch of two: index 0 is the image being optimized, index 1 is the source image
    t_input = tf.placeholder(tf.float32, img.shape)
    param_f = param.image(img.shape[0], decorrelate=True, fft=True, alpha=False)
    param_f = tf.stack([param_f[0], t_input])

    transforms = [
      transform.pad(8, mode='constant', constant_value=.5),
      transform.jitter(8),
      transform.random_scale([0.9, 0.95, 1.05, 1.1] + [1]*4),
      transform.random_rotate(list(range(-5, 5)) + [0]*5),  # list() needed on Python 3
      transform.jitter(2),
    ]

    T = render.make_vis_T(model, objective, param_f, transforms=transforms)
    loss, vis_op, t_image = T("loss"), T("vis_op"), T("input")

    tf.global_variables_initializer().run()
    for i in range(n_steps):
      _ = sess.run([vis_op], {t_input: img})

    result = t_image.eval(feed_dict={t_input: img})
    return Image.fromarray((result[0] * 255).astype(np.uint8))

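Before building a full collage, it's worth testing a single layer on one image. This is just a sketch: n_steps is lowered from 512 so it finishes quickly, and source.png can be any RGB image you've uploaded to the Colab filesystem (or reuse a DeepAI URL from the earlier snippet):

# Hypothetical quick test: invert one layer of one image
test_img = load("source.png")
preview = feature_inversion(test_img, layer='mixed4d', n_steps=128)
preview   # a PIL image, so it displays inline as the cell's output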

Here we load the provided image and run it through a series of different layers.
Each one produces a slightly different output, and they get trippier as you go along.
Once we have generated each image, we combine them into a collage so that the result is easier to consume on Twitter. That's what h_stack and v_stack are used for.

def get_deep_images(imageUrl):
  sourceImage = load(imageUrl)
  # Layers to visualize, roughly ordered from early (edges/textures) to late (objects)
  layers = ['conv2d%d' % i for i in range(0, 3)] + [ 'mixed3a', 'mixed3b',
                                                     'mixed4a', 'mixed4b', 
                                                     'mixed4c', 'mixed4d', 
                                                     'mixed4e', 'mixed5a', 'mixed5b']
  # Build a grid, three images per row
  h_stack = []
  v_stack = []
  image_iter = 0
  for layer in layers:
    image_iter += 1
    processed_image = feature_inversion(sourceImage, layer=layer)
    h_stack.append(processed_image)
    if (image_iter % 3 == 0):
      image_iter = 0
      v_stack.append(np.hstack(h_stack))
      h_stack = []
  if len(h_stack) > 0:
    v_stack.append(np.hstack(h_stack))
  collage = np.vstack(v_stack)

  # Add one more column: the same layer at increasing cosine-similarity powers
  v_stack = []
  for cossim in [0.0, 0.5, 1.0, 2.0]:
    processed_image = feature_inversion(sourceImage, layer='mixed4d', cossim_pow=cossim)
    v_stack.append(processed_image)
  cossim_column = np.vstack(v_stack)
  collage = np.hstack([collage, cossim_column])

  Image.fromarray((sourceImage * 255).astype(np.uint8)).save("source.png")
  Image.fromarray(collage).save("image.png")
  return collage
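To preview a collage inside Colab before tweeting anything, you can display it with the cv2_imshow patch we imported earlier. The channel flip is there because OpenCV expects BGR while the collage is RGB. Note that this runs sixteen 512-step optimizations, so expect it to take a while even on a GPU runtime:

# Hypothetical preview: generate, process, and display inline
preview_url = get_generated_image_url("harmful candy")
collage = get_deep_images(preview_url)
cv2_imshow(collage[:, :, ::-1])   # flip RGB -> BGR so colors render correctly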

The Actual App

Now that we have the code to generate an image, process it, and merge the results into a collage, let's put it all together and make the app work!
Currently we just generate the text from a random list of words, but you can use the commented-out section to choose a random tweet on a profile and grab replies from it instead (it references a get_user_tweets helper that isn't shown in this post; tweepy's api.user_timeline is the natural way to implement it).
Once we have our text, we call the functions above to generate an image, save it, and finally attach the results to our tweet!

def run_deep_dream_comment_search(api):
  iter = 0
  # Here we are randomly generating the words
  reply_text = random.choice(WORDS) + " " + random.choice(WORDS)

  # Alternatively, take words from a user reply:
  # all_replies = []
  # all_user_tweets = get_user_tweets(api, "WeDeepDream")
  # while (iter < 100 and all_replies == []):
  #   iter += 1
  #   random_tweet = random.choice(all_user_tweets)
  #   all_replies = get_replies(api, random_tweet)
  #   print("Iter: " + str(iter))
  # if (iter < 100):
  #   random_reply = random.choice(all_replies)
  #   reply_text = random_reply.full_text
  #   reply_user = random_reply.user.screen_name
  #   api.update_status(status='@' + str(reply_user) + " Said: " + str(reply_text))

  print("Generating Images from text")
  gen_image_url = get_generated_image_url(reply_text)
  collage = get_deep_images(gen_image_url)

  # Upload images and get media_ids
  filenames = ["source.png", "image.png"]
  media_ids = []

  for filename in filenames:
      res = api.media_upload(filename)
      media_ids.append(res.media_id)

  api.update_status(status='Image generated from string: ' + str(reply_text[0:30]) + " https://github.com/OGoodness", media_ids=media_ids)
  print("Just tweeted")

  # Clean up the temporary image files
  for filename in filenames:
      try:
        os.remove(filename)
      except OSError:
        print("Whoops!")


Run Repeatedly

Now that we have everything working, let's set up the Colab notebook to run continuously!
Eventually the Colab session WILL time out, but there are ways around that; for now, running for a few minutes works as a proof of concept.

s = sched.scheduler(time.time, time.sleep)
api = get_api()

def do_something(sc):
    print("Doing stuff...")
    run_deep_dream_comment_search(api)
    # Re-schedule ourselves to run again in 10 minutes
    s.enter(600, 1, do_something, (sc,))

s.enter(10, 1, do_something, (s,))
s.run()
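If the sched module feels heavyweight, a plain loop does the same job. This is just an equivalent sketch, not what the bot above uses:

# Simpler alternative: tweet every 10 minutes in an endless loop
while True:
    run_deep_dream_comment_search(api)
    time.sleep(600)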

Example Outputs

Base Michael

Finds Panel

Rabbit Ellen
