KalyanChakravarthy.net

Thoughts, stories and ideas.

Flask custom template loaders

Flask is amazing. The Jinja2 templating engine packaged with it is equally amazing.

The default behaviour of a Flask app is to look for template files in the app.template_folder directory. Although this works in most use-cases, sometimes you need additional control - for example, if you have user-specific templates or want to load templates dynamically from a database.

This can be accomplished by initialising app.jinja_loader with a custom loader.

Jinja2 ships with several different loaders by default. The easiest one is DictLoader, which simply loads templates from a dictionary.

:::python
from flask import Flask, render_template
import jinja2

app = Flask(__name__)
app.jinja_loader = jinja2.DictLoader({
		'index.html' : """
			{% extends 'base.html' %}
			{% block text %}
			Super cool!
			{% endblock %}
		""",

		'base.html' : """
			<b>{{ self.text() }}</b>
		"""

})

@app.route('/')
def doHome():
	return render_template('index.html')

The above example is fairly simple - it still pre-loads all the templates. For a non-trivial requirement, such as loading templates from a database, you can use a FunctionLoader.

:::python
def load_template(template_name):
	# Jinja2 expects the third element of the returned tuple to be a
	# callable that returns True while the template is still up to date
	is_uptodate = lambda: True

	if template_name == 'index.html':
		return ("""
			{% extends 'base.html' %}
			{% block text %}
			Super cool!
			{% endblock %}
		""", None, is_uptodate)

	if template_name == 'base.html':
		return ("""
			<b>{{ self.text() }}</b>
		""", None, is_uptodate)

	# Returning None makes Jinja2 raise TemplateNotFound
	return None


app.jinja_loader = jinja2.FunctionLoader(load_template)
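
The hard-coded strings above could of course come from anywhere. As a rough sketch of the database case, here is a FunctionLoader backed by SQLite - the templates.db file and the templates(name, source, updated_at) table are assumptions made purely for illustration:

:::python
import sqlite3
import jinja2

DB_PATH = 'templates.db'

def load_template_from_db(template_name):
	# Assumed schema: templates(name TEXT PRIMARY KEY, source TEXT, updated_at REAL)
	db = sqlite3.connect(DB_PATH)
	row = db.execute(
		'SELECT source, updated_at FROM templates WHERE name = ?',
		(template_name,)
	).fetchone()
	db.close()
	if row is None:
		return None  # unknown name -> Jinja2 raises TemplateNotFound
	source, updated_at = row

	def is_uptodate():
		# Re-check the timestamp so edits in the database invalidate the cache
		db = sqlite3.connect(DB_PATH)
		current = db.execute(
			'SELECT updated_at FROM templates WHERE name = ?',
			(template_name,)
		).fetchone()
		db.close()
		return current is not None and current[0] == updated_at

	return (source, None, is_uptodate)

app.jinja_loader = jinja2.FunctionLoader(load_template_from_db)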

Jinja2 ships with the following loaders, all of which can be used with Flask. They can also be combined; a small ChoiceLoader sketch follows the list.

  • FileSystemLoader
  • PackageLoader
  • DictLoader
  • FunctionLoader
  • PrefixLoader
  • ChoiceLoader
  • ModuleLoader

Open home webserver to internet

There are several ways to open up your home webserver to the internet; the most popular one involves port forwarding. But that is not something you can assume to be available in all situations.

If you already have a remote server, you can use it to create a tunnel. This essentially involves 2 steps:

1. create tunnel from home server to remote server

This binds <remote-port-x> on the remote server and forwards connections made to it back to <home_port> on the home machine.

ssh -R <remote-port-x>:<home_localhost>:<home_port> <remote.server.com>

Example:

8080 = remote port
5000 = local port

ssh -R 8080:localhost:5000 user@remote.server.com

2. make remote server port internet aware

By default, opening a reverse tunnel only binds it to the loopback interface on the remote server. This means the home computer will be accessible from localhost:<remote-port-x> on the remote server, but not from <remote.server.com>:<remote-port-x>.

There are multiple ways to solve this

a. enable GatewayPorts

  1. Open /etc/ssh/sshd_config on the remote server
  2. Set GatewayPorts to either yes or clientspecified
  3. Restart ssh daemon
    ubuntu - sudo service ssh restart

Note that with clientspecified, the bind address must also be given explicitly in the step 1 command, e.g. ssh -R 0.0.0.0:8080:localhost:5000 user@remote.server.com (the ssh -g flag only affects local -L forwards, not the -R tunnel from step 1).

b. create local tunnel

Since the initial tunnel binds to the loopback interface, a second, local tunnel - created on the remote server itself - binds the port to all interfaces on a different port, thereby exposing it to the internet.

ssh -L 0.0.0.0:<internet_port>:localhost:<remote-port-x> <remote.server.com>

<remote-port-x> is the same port specified in step 1

c. socat or netcat

Either of these tools can be used to relay traffic from a public-facing port to the tunnelled port (I haven't tried this yet).
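
An untested sketch with socat, run on the remote server: listen on the public port and relay each connection to the loopback-bound tunnel port.

socat TCP-LISTEN:<internet_port>,fork,reuseaddr TCP:localhost:<remote-port-x>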

Propagating iOS Gestures to parent views

If a custom UIView adds a UITapGestureRecognizer to one of its elements - say, for tracking taps - the default behaviour prevents the gesture from being propagated to parent views that may be listening for the same gesture.

Gesture propagation can be ensured simply by implementing a UIGestureRecognizerDelegate method.

:::objc
- (void)setup {
	UITapGestureRecognizer *gesture = [[UITapGestureRecognizer alloc] init];
	gesture.delegate = self;

	// Attach the recognizer to self, or to whichever subview should track the tap
	[self addGestureRecognizer:gesture];
}

#pragma mark - UIGestureRecognizerDelegate
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}

Unlike other solutions, this does not require the UIView subclass to be aware of the existence of any other gesture recognizers.

Comparing NSDates from different timezones

Comparing two NSDate objects is trivial, unless they are from different timezones, in which case one of them should be converted to the other's timezone and then compared.

Here is a simple code snippet that I came up with. It gets the offset from GMT in seconds for both timezones and uses the delta to convert a date relative to the other timezone.

:::objc
// Get time and time zone offset from GMT
NSDate *currentTime = [NSDate date];
NSInteger currentTZSec = [[NSTimeZone defaultTimeZone] secondsFromGMT];

NSDate *otherTime = info.otherTime;
NSInteger otherTZSec = [info.otherTimeZone secondsFromGMT];

// Get the timezone delta in seconds
NSInteger deltaTZSec = currentTZSec - otherTZSec;

// Convert other time into current timezone
NSDate *otherTimeInUserTZ = [otherTime dateByAddingTimeInterval:deltaTZSec];

// Check whether the other time is later than the current time
if ( [otherTimeInUserTZ compare:currentTime] == NSOrderedDescending ) {
	NSLog(@"currentTime < otherTime");
} else {
	NSLog(@"currentTime >= otherTime");
}

Zhang-Suen Thinning Algorithm

Fiddling around with ideas on how to break captchas, I realised one of the first steps could be simplifying the input image into a simple node graph (after eliminating background artefacts). From the node graph, it should be trivial to classify a particular graph structure into its corresponding ASCII symbol, without delving into neural networks.

To construct the node graph, it seemed logical to thin the symbols down to simple lines, which is when I came across the Zhang-Suen thinning algorithm (PDF paper).

Before / after images of the thinning (figures omitted).

After implementing the algorithm, I realised that although it's quite efficient at simplifying the symbols, it is insufficient on its own. More work needs to be done on this.

Python implementation:

:::python
import sys
from PIL import Image
from pprint import pprint as pp

C_BLACK = 0
C_WHITE = 1

####################################################################
#	Helper functions for working with data returned as a single 
#	array by list(img.getdata()) method
####################################################################

# is the pixel black or white
# assuming col represents gray color in (r,g,b) format
def _isbw(col):
	c = 240 
	if col[0] < c and col[1] < c and col[2] < c:
		col = C_BLACK
	else:
		col = C_WHITE

	return col

def _getcoord( size, pos ):
	x,y = pos
	w,h = size
	i = (y * w) + x
	return i

def _getbw( imgdata, size, pos ):
	return imgdata[ _getcoord(size,pos) ]
	
def _setbw( imgdata, size, pos, col ):
	imgdata[ _getcoord(size,pos) ] = col

def _getbwdata( img ):
	d = list(img.getdata())
	for i, c in enumerate(d):
		d[ i ] = _isbw( c )
		# print i, c, d[ i ]
	return d


####################################################################
#	Algorithm implementation
####################################################################

# Step-specific conditions. parr is [p2,p3,...,p9,p2], so parr[0] = p2,
# parr[2] = p4, parr[4] = p6, parr[6] = p8. Each test requires at least
# one white (background) pixel among the listed neighbours.
# step1: p2 + p4 + p6 > 0 and p4 + p6 + p8 > 0
# step2: p2 + p4 + p8 > 0 and p2 + p6 + p8 > 0
step1_func = lambda parr: parr[0] + parr[2] + parr[4] > 0 and parr[2] + parr[4] + parr[6] > 0
step2_func = lambda parr: parr[0] + parr[2] + parr[6] > 0 and parr[0] + parr[4] + parr[6] > 0

def do_step(imgdata, size, func):
	w, h = size
	was_modified = False
	for j in range(1,h-1):
		for i in range(1,w-1):
			# P1 is the current pixel; P2..P9 are its 8 neighbours,
			# starting from the pixel above and going clockwise
			p1 = _getbw( imgdata, size, ( i,  j   ) )
			p2 = _getbw( imgdata, size, ( i,  j-1 ) )
			p3 = _getbw( imgdata, size, ( i+1,j-1 ) )
			p4 = _getbw( imgdata, size, ( i+1,j   ) )
			p5 = _getbw( imgdata, size, ( i+1,j+1 ) )
			p6 = _getbw( imgdata, size, ( i,  j+1 ) )
			p7 = _getbw( imgdata, size, ( i-1,j+1 ) )
			p8 = _getbw( imgdata, size, ( i-1,j   ) )
			p9 = _getbw( imgdata, size, ( i-1,j-1 ) )

			# A_Val: number of black-to-white transitions in the circular
			# sequence P2,P3,...,P9,P2 (the image is inverted, black = 0)
			A_Val  = (p2 == 0 and p3 == 1) + (p3 == 0 and p4 == 1)
			A_Val += (p4 == 0 and p5 == 1) + (p5 == 0 and p6 == 1)
			A_Val += (p6 == 0 and p7 == 1) + (p7 == 0 and p8 == 1)
			A_Val += (p8 == 0 and p9 == 1) + (p9 == 0 and p2 == 1)

			# B_Val: number of white neighbours (8 minus the black count)
			B_Val = sum([p2,p3,p4,p5,p6,p7,p8,p9])
			parr = [p2,p3,p4,p5,p6,p7,p8,p9,p2]

			# Deletion conditions: the pixel is black, 2 <= B_Val <= 6,
			# exactly one transition (A_Val == 1), and the step-specific
			# test passes
			if p1 == C_BLACK:
				if 2 <= B_Val <= 6:
					if A_Val == 1:
						if func(parr):
							_setbw( imgdata, size, (i,j), C_WHITE )
							was_modified = True
	return (imgdata, was_modified)
								

####################################################################
#	Work on image / main
####################################################################

if __name__ == '__main__':
	imgname = 'abcd.jpg'
	img = Image.open(imgname)
	w, h = img.size

	""" The data is returned as a single array """
	pixels = list(img.getdata())

	# Create black and white pixel bitmap image
	nimg = Image.new('1', img.size, -1 )

	# Convert source image to black and white pixels
	bwdata = _getbwdata( img )

	# Run the algorithm until no further modifications are required
	is_modified = True
	while is_modified:

		bwdata, modified1 = do_step(bwdata,img.size,step1_func)
		bwdata, modified2 = do_step(bwdata,img.size,step2_func)

		is_modified = modified1 | modified2

		print is_modified, modified1, modified2

	# Push the data to image
	nimg.putdata( bwdata )
	nimg.show()

	## And save
	fp = open('.abcd_output.jpg','wb')
	nimg.save(fp)
	fp.close()