OK, after a chat with Cliff I have taken a step back.
The idea I had at the top level was (a) simplify the polygons to non overlapping simple contours, (b) work out how they are nested, (c) cut out the ones I want.
The main reason for that logic is that the final stage allows a variety of operations such as union, intersection, difference and so on, very simply. I had assumed the second stage of finding the nesting would be easy (and quick).
I now find that working out which polygon is inside which is complex, in the sense that you have to walk around the edge to find where they diverge. This is not very efficient. The first stage was fun as well, as it meant working out how the overlaps worked, and multiple overlaps, so as to ensure creating simple polygons even when lots intersected at a point.
So, thinking about it from scratch, I realise how the general clipping library probably does it. They take an operation as part of the clipping logic (the first stage). The problem I have is that they did not seem to care which way around the input polygons went, and only seemed to use odd/even logic where I want proper winding number logic.
Even so, the way to do it is to take the ordered list of line segments and process them. It is a sweep over the polygons left to right. In doing so you may well encounter multiple line segments, one on top of another. The nice thing is you do not care which segment is from which polygon - they are just edges that you are encountering. I do care what direction they are going, but that just affects the way you count them. Basically you know if they represent an edge you are interested in or not by how they move the winding number as you pass them. If looking for an intersection, for example, you are looking for the winding number crossing 2, so it matters not if they go from 0 to 10 in one go (lots of polygons on top of each other) - that means put just one line there as the edge of your intersection. For union you look for the edge where the winding number crosses 1.
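To pin the rule down (just a sketch of how I picture it, not finished code): for each bundle of coincident segments met during the sweep, look at the winding number on the side you came from, add up the +1/-1 contributions of the directed segments, and emit a single output edge only if the count crosses the threshold for the operation - 1 for union, 2 for intersection.

    /* Sketch: does this bundle of coincident edges produce an output edge?
     * "before" is the winding number on the side you came from, "delta" is
     * the sum of +1/-1 for each directed segment in the bundle, "want" is
     * 1 for union or 2 for intersection.  Returns +1 or -1 for an output
     * edge (and its direction), 0 for no edge here. */
    static int edge_output(int before, int delta, int want)
    {
       int after = before + delta;
       if (before < want && after >= want)
          return +1;            /* winding rises through the threshold */
       if (before >= want && after < want)
          return -1;            /* winding falls through the threshold */
       return 0;                /* not an edge of the result */
    }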
It solves the issue of working out which polygon is inside which. As you sweep you create the new set of polygons from the output. You actually have a set of fragments that you add line segments to either end of, and they end up closing neatly. This works well for union or intersection. For difference you need to reverse the winding level 2 parts that you find to make them a hole, but still, at each line segment, you only make one line to add to your output. I suspect vertical lines on the horizontal sweep need some additional care, but still, not that complex.
It also works to simplify a single, possibly self intersecting, polygon by doing a union on it.
I'll ponder that design as a new approach. Shame, as I liked the general idea with the winding rule type logic. However, this is a better approach for overall speed and reliable output.
Thanks for asking the questions, Cliff. I was too deep in my idea of finding the nesting level to take a step back.
2011-07-31
2011-07-30
Winding me up!
There are a couple of main jobs my polygon library has to do. One is making a set of overlapping polygons into a set of simple non-overlapping polygons. The other is working out which polygons are inside others so that we can do the basic Boolean maths on them.
The clip function turned out to be relatively simple, thankfully. And I also have a function to tidy up the polygons removing dead ends and unnecessary mid points on straight lines.
The problem is working out which polygons are inside others. My plan was simply to use the winding number logic, which as I said was simple! Basically, to find the winding number of a point, draw a line to infinity (I chose a line going left) and see what lines you cross. If they cross one way the winding number goes up, the other way it goes down. There are a few edge cases (sorry about the pun) where your line just touches another polygon without crossing it, but with correct use of greater-than-or-equals in the right places you avoid problems.
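For what it is worth, the crossing count is only a few lines of code. This is just a sketch of the idea, not my actual code, and the sign convention depends on which way round you call clockwise:

    /* Sketch: winding number of point (px,py) for a closed path of n points
     * stored as x,y pairs, counting crossings of a ray going left from the
     * point.  The <= / > mix is the "greater than or equals in the right
     * places" so a vertex exactly on the ray is only counted once. */
    static int winding(double px, double py, const double *pts, int n)
    {
       int w = 0, i;
       for (i = 0; i < n; i++)
       {
          double x1 = pts[2 * i], y1 = pts[2 * i + 1];
          double x2 = pts[2 * ((i + 1) % n)], y2 = pts[2 * ((i + 1) % n) + 1];
          if (y1 <= py && y2 > py)
          {                     /* edge passes the ray going up */
             if (x1 + (py - y1) * (x2 - x1) / (y2 - y1) < px)
                w++;
          }
          else if (y1 > py && y2 <= py)
          {                     /* edge passes the ray going down */
             if (x1 + (py - y1) * (x2 - x1) / (y2 - y1) < px)
                w--;
          }
       }
       return w;
    }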
My plan was to find the winding number of each polygon. I.e. if it was clockwise the winding number of points immediately inside it, and if anti-clockwise the winding number of points immediately outside it. That is the winding number of points on your right as you walk around the polygon. That way I can do simple logic like union which means just keeping all polygons with winding number of 1 or intersection which means keeping all those with winding number of 2.
My plan was to take the leftmost point of each polygon (which I also use to work out if clockwise or anti-clockwise) and find the winding number of that point. Simples! Or so I thought.
Of course I was immediately hit by a problem - which was pretty obvious. When working on the 3D models I am doing operations on adjacent slices to help find the top and bottom layers of things. This meant I was often working on identical polygons on top of each other. The clip logic already removed any segments that cancel each other out (i.e. clockwise on top of anti clockwise), but when two polygons are the same they were getting the same winding number (Doh!). The system needed to make an arbitrary decision and put one logically inside the other so giving one a bigger winding number.
My naive solution was to have a special case for where the crossing point I found had the same X as the point I started with, rather than being strictly to its left, and only count it if it was on a polygon earlier in my list. This did work for the case of identical polygons, as it meant one polygon was logically inside the other.
Sadly that was naive in the extreme. Such cases were just a special case of the more general problem where the point I am looking at is the same. If the polygons are in fact different, perhaps on the right, then clearly one is inside the other - you just cannot tell from the one point you are looking at.
So the plan now is to handle the case where the point is the same by following the polygons round until we either confirm it is the same all round (and then pick the first in the list of polygons as a decider), or one branches off left or right. Thankfully the case where they go different directions is already catered for in the clip logic, as the segments cancel each other out - so this means I can rely on the polygons going the same way (my linked list only goes forward!). I have also removed points in the middle of straight lines. So I can just keep going until the points are different and then decide if this is a turn left or right of my original polygon.
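The left-or-right decision at the point where they finally differ is just a cross product - something like this sketch, with a being the last shared point, b our next point, and c the other polygon's next point:

    /* Sketch: sign of the cross product of a->b and a->c.  Positive means c
     * is to the left of the direction a->b (with y increasing upwards),
     * negative means to the right.  Zero should not happen once collinear
     * mid-points have been removed. */
    static double turn(double ax, double ay, double bx, double by, double cx, double cy)
    {
       return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }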
I'll try the logic later, when I have done some real work.
2011-07-29
Busy week, busy weekend
Well, weekend approacheth,
I have a load of things to do resulting from the FireBrick meeting and FireBrick course this week. It has been a busy week, topped with feeling like crap yesterday evening. Most of the cosmetic stuff (UI layout issues) is sorted, but a couple of bugs remain, and I really hate knowing bugs are there in the code. In fact I end up with trouble sleeping as I find I am debugging the code in my sleep. It gets a tad surreal when I realise the next day that I have not in fact found and fixed the bugs, just done it in my dreams, and now I have to fix the code for real - usually exactly where I dreamed the bug was located. Anyone else out there debug code in their sleep? If only I was paid by the hour and could claim for my time :-)
Then I want to play with my polygon libraries. It is tantalizingly close - and should be really cool if only I can sort this damn coffee machine hopper extension. It is not that complex, and so I have something stupidly simple to fix. Just that all of the test shapes I run are fine, so I have to start with the real shape which is broken and reduce it to the basics to find the cause. Arrrg...
I am thinking now I will publish the polygon library on its own as open source when I finally have it cracked. I am determined to avoid loads of horrid special cases (code like that is either wrong or following an ETSI/ITU standard).
So, fun weekend as usual.
Sonim 7 bites the dust
She has done it again - well, to be honest we left it quite a while.
- Key broken off.
- Back broken but still holding together just.
- SIM keeps reporting not present.
- The clincher was her dropping it in the bath and it drowning.
What can I say!
2011-07-27
Blood sugar
Being diabetic, as I am, I have to worry about my blood sugar level.
The metric we use is mmol/l, though that is changing, it seems! It is a shame, as I have been used to it for many, many years, what with my mother being on insulin since a few years after I was born. From an early age I grew up understanding about blood sugar levels.
What I was always told, and is apparently still the norm, is 4 to 7 mmol/l is normal for before a meal.
Now, maybe the meter I use is out or maybe I am abnormal (likely) but practical experience is that 4-7 is way out for normal... If my BM is 5.5 I am losing concentration. If 4.5 I am shaking. If below 4 I am feeling ill. So 4-7 is not normal for me - more like 6-7 or 6-8...
If I am not alone - please say!
What is especially odd is that following the normal rules is not working. I am on metformin, gliclazide, and sitagliptin now, and not yet insulin injections (phew). The gliclazide is meant to give my pancreas a kick, and does - especially with something to work on!
If I have no breakfast I feel peckish in the afternoon and blood sugar is fine, but if I have 500ml of lucozade and a gliclazide at breakfast then I suffer a few hours later - yes, BM goes up, but a few hours later it is not just down, by lunch time it is in the 4.X region and I feel ill. But not having breakfast is fine. So somehow it works giving a kick to my pancreas and some glucose as well! The good thing is all morning I am way more with it than otherwise. So much for low GI foods as per the normal rules for being diabetic.
Seems a kick is what I need. We'll see how it goes in the long term.
Though alcohol in the evenings is also a factor I expect :-)
2011-07-26
Bricking it
Well, I have the first proper training course for the new FireBricks starting tomorrow. Two day course.
It is a tricky topic as you end up spending your time explaining basic IP routing if you are not careful (and that is a separate course).
The way the new FB2700 and FB2500 work is a tad different to the older FB105 FireBricks, which is not surprising as we re-wrote the whole thing from scratch. This makes the training course slightly more complex. If someone knows the FB105 we have to cover all the differences, but if they don't we can explain the new system from scratch.
The main difference is the underlying routing logic. The FB105 made routing very much tied in to the session tracking logic at a low level. The session tracking was the routing cache. So the basic logic for establishing a session also defined the routing. It meant the routing rules were a list of rules (match the first you find) defining where the packets were to go, and that stuck for the session.
The new system is not that far off in some ways, as there is a session based routing override, but at its heart the new FireBricks use conventional routing logic. This means you decide where a packet goes based on the target IP address and the current route you have (most specific applies). This is different to a 105, which worked on a rule list rather than a most-specific routing rule, and was not per-packet. The new routing is based on static routes, profiles, BGP and all sorts, and can change per packet - like normal routers.
However the new FireBricks have a trick up their sleeve - they have per-session logic to allow or deny the session, obviously, but that can set a new gateway for routing for the session. This works using a route override table checked at the session set-up, just like the 105, and kept for the whole session. Unlike the 105, instead of saying where the packet goes directly it says it indirectly, by giving a new target IP for routing purposes. This allows routing based on protocol and source IP just like the 105, but as the target is itself just an IP it allows the target to be subject to routing rules as they change in real time. The end result is a lot more flexible, especially when looking at fall-back type arrangements where you want routing to change on the fly for an established session.
Of course, that is not the only change - but it is probably the deepest change to try and explain. We have a totally new web user interface, and a new idea of a config that is all in XML (with web based editing tools). One of the biggest changes is that IPv6 is fully supported and pretty seamless. Basically, almost anywhere you can put an IP address you can put either IPv4 or IPv6. At present DHCP settings are an exception but even that will probably change. We even do the new VRRP3 so IPv4 and IPv6 are just interchangeable on VRRP settings.
The new FireBricks then have a load of new features like L2TP and BGP, but they are not too hard to explain.
Should be a fun course.
Next month we are considering doing a one-day course on this for end users rather than dealers, and I would be interested to hear if anyone wants to go on that. No idea on course pricing yet - catch me on irc.
2011-07-25
Polygon libraries
Well, I did a bit more on the 3D modeling stuff.
I found an interesting general polygon clipper library. So I figured it would do as an interim step until I made my own. I re-coded everything to use it, and then banged my head against the wall a lot. I then used svn to rewind everything about 5 hours...
Basically it is quite a nice library. It seems to work. It has a good simple interface.
However, looking at it, it seems complicated. It seems to classify each vertex in one of 16 different ways and have code for handling each. That seems wrong to me - the algorithm should be simpler and neater than that. It just feels wrong.
So I slept on it and worked out how the logic should work - I think.
Step 1: Find mid-line intersections and create new points
This could be done brute force, but there are some optimisations that can be done. You are looking for any line that crosses or touches any other line and adding a point where that happens. The result is no crossing lines, but paths that have points which are the same as points on other paths.
Step 2: Find points that intersect
This comes out of finding the crossing lines, basically. Each endpoint that intersects will be part of two joined segments, one or more times.
Step 3: Work out if the intersecting endpoints cross
This can be done with angles. Imagine AXB and CXD are the possibly crossed lines, and X is the point they cross. So it appears in line segments AX then XB, and CX then XD. You can look at the angles of each of the four points (A, B, C, D) around X and work out if the paths cross.
Thankfully you don't need real angles for this (slow atan logic); you just need something you can compare, and that can be done by calculating the relative position of B, C and D relative to AB using only multiplication and division, so it is nice and quick.
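One way to get a comparable value without atan is the "diamond angle" trick - a number from 0 to 4 that increases monotonically with the real angle, using only a division. This is a sketch of that idea, not necessarily how I will end up doing it:

    #include <math.h>

    /* Sketch: a pseudo angle for the direction (dx,dy), in the range 0 to 4
     * going anti-clockwise from the positive x axis.  Not a real angle, but
     * it sorts the same way, which is all you need to order A, B, C and D
     * around X and hence tell if AXB and CXD cross.  Assumes dx and dy are
     * not both zero. */
    static double pseudo_angle(double dx, double dy)
    {
       double p = dy / (fabs(dx) + fabs(dy));
       if (dx < 0)
          return 2 - p;         /* left half plane */
       if (dy < 0)
          return 4 + p;         /* bottom right quadrant */
       return p;                /* top right quadrant */
    }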
Step 4: Uncross them
You just change AXB and CXD to AXD and CXB, which means instead of crossing they bounce off the mid point just touching each other. You can easily do this with multiple lines on the same point.
The end result means turning crossed-over polygons into a set of simple polygons that do not cross. You need data structures such that you can easily splice lists of points, and can mop up any loops you create that would otherwise be left over. That is not too hard to do.
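With a forward-only linked list of points per path, and the shared point X stored as a separate node in each path, the uncrossing itself is just a pointer swap - roughly this (a sketch, not my actual data structures):

    /* Sketch: swap the "next" pointers of the two coincident X nodes, which
     * turns AXB and CXD into AXD and CXB.  The same swap either splits one
     * loop into two or joins two loops into one, which is why you then need
     * to mop up the loops you have created. */
    struct point
    {
       double x, y;
       struct point *next;
    };

    static void uncross(struct point *x1, struct point *x2)
    {
       struct point *t = x1->next;      /* was B */
       x1->next = x2->next;             /* ...A, X, D... */
       x2->next = t;                    /* ...C, X, B... */
    }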
Step 5: Winding number
Then work out if each path is clockwise or anti-clockwise and the winding number. I think the logic here is the winding number on your right as you traverse the path (so the winding number of the inside of a clockwise path, or outside of an anti-clockwise path), which means a clockwise path and an anti-clockwise hole within it have the same winding number and so stay together when we do Boolean operations (see below). Winding number is not too hard to work out, especially knowing the paths no longer intersect.
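Clockwise or anti-clockwise falls out of the sign of the enclosed area (the shoelace formula), something like this sketch:

    /* Sketch: twice the signed area of a closed path of n x,y points.  With
     * y increasing upwards a positive value means anti-clockwise, negative
     * means clockwise; if your y axis points down the sense flips. */
    static double signed_area2(const double *pts, int n)
    {
       double a = 0;
       int i;
       for (i = 0; i < n; i++)
       {
          int j = (i + 1) % n;
          a += pts[2 * i] * pts[2 * j + 1] - pts[2 * j] * pts[2 * i + 1];
       }
       return a;
    }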
Step 6: Boolean maths
This is where the library fell down for me - I needed different logic for self intersecting shapes than they used. It seems that they did nothing with winding number, so everything was odd/even. However, having clipped a shape to simple polygons you can do Boolean logic quite simply. E.g. keep all paths with winding number 1 and you have a union function. Keep winding number 1 but reverse winding number 2 and you have difference. Keep only winding number 2 and you have intersection. Simples. Well, I hope so - it seems simple in my head and I hope I have the simple logic right.
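As a sketch, the whole of that step is little more than a switch on the winding number each simple path ended up with (where -1 means keep it but reversed, so it becomes a hole):

    /* Sketch of the Boolean step: +1 keep the path, -1 keep it reversed
     * (making it a hole), 0 drop it.  "winding" is the winding number on
     * the right-hand side of the path as described above. */
    enum op { OP_UNION, OP_INTERSECT, OP_DIFFERENCE };

    static int keep_path(int winding, enum op o)
    {
       switch (o)
       {
       case OP_UNION:
          return winding == 1;
       case OP_INTERSECT:
          return winding == 2;
       case OP_DIFFERENCE:
          if (winding == 1)
             return +1;
          if (winding == 2)
             return -1;         /* reversed: becomes a hole */
          return 0;
       }
       return 0;
    }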
So, the plan is to make a new polygon library of my own. I am making it as a general purpose library and will probably open source it.
2011-07-24
3D coding
Well, I decided to have a bit of fun this weekend.
Basically, the 3D printing needs several bits of software:
- Something to let you design 3D models - currently using openscad
- Something to process the output STL into GCODE, currently using skeinforge
- Something to send the GCODE to the printer and otherwise control the printer - currently using repsnapper
- Firmware in the printer - currently using my hacked about version of shapercube code
What I have been playing with this weekend is the STL to GCODE bit. STL is a file format that basically just gives you a list of 3D triangles which you hope make the surface of something solid. The software basically turns that into print instructions as to where to move the head (X, Y and Z), how fast (F) and how much plastic to extrude (E). You need to consider the outline/surface, filled layers, partly filled layers, layers that are flying over thin air, speed, temperature, all sorts. Skeinforge does a damn good job, and has lots of plug-ins, but is all in python. So far I have ended up writing a post processor for it to lift the head when moving over existing printed output to reduce problems.
Naturally I want to have a bash at this as it is a challenge, and I am doing it in C to be faster as well.
First the STL is sliced into 2D closed loops that are the solid area for each layer in the model. That bit was actually very easy, though some of the STL files need a slight cleanup as they have flat interior surfaces facing each other and taking no space, which makes 2D shapes with dead-end cuts into the shape, rather annoyingly. It also quite easily makes split straight lines (i.e. unnecessarily two segments rather than one) as you need two triangles for a flat rectangular surface. I have cleanup that handles both, but it probably needs to be smarter.
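The core of the slicing is simple: each triangle that straddles a layer height contributes one line segment, found by interpolating along the two edges that cross the plane. A rough sketch, with made-up types, and ignoring the awkward case of triangles lying exactly in the plane (which is where the cleanup comes in):

    struct xy { double x, y; };
    struct xyz { double x, y, z; };
    struct seg { struct xy a, b; };

    /* Where the edge p->q crosses the plane z = h (assumes it does cross). */
    static struct xy cut(struct xyz p, struct xyz q, double h)
    {
       double f = (h - p.z) / (q.z - p.z);
       struct xy o = { p.x + f * (q.x - p.x), p.y + f * (q.y - p.y) };
       return o;
    }

    /* Sketch: fill in the segment where a triangle crosses the layer z = h.
     * Returns 1 if it does, 0 if the triangle is entirely above or below. */
    static int slice_triangle(struct xyz v0, struct xyz v1, struct xyz v2, double h, struct seg *s)
    {
       struct xyz v[3] = { v0, v1, v2 };
       struct xy pt[2];
       int i, n = 0;
       for (i = 0; i < 3 && n < 2; i++)
       {
          struct xyz p = v[i], q = v[(i + 1) % 3];
          if ((p.z <= h && q.z > h) || (q.z <= h && p.z > h))
             pt[n++] = cut(p, q, h);    /* this edge straddles the layer */
       }
       if (n < 2)
          return 0;
       s->a = pt[0];
       s->b = pt[1];
       return 1;
    }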
Then you need a toolkit of 2D stuff, and that is where it gets more messy. Obviously primitives like "how far is a point from a line" and "where do two lines intersect" are easy enough. I could remember some from school, but googling you find plenty of worked examples.
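For reference, the two primitives look something like this - a sketch of the standard formulae, not necessarily the exact form in my code:

    struct pt { double x, y; };

    /* Squared distance from point p to the segment a-b (assumes a != b). */
    static double dist2_point_seg(struct pt p, struct pt a, struct pt b)
    {
       double dx = b.x - a.x, dy = b.y - a.y;
       double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / (dx * dx + dy * dy);
       if (t < 0)
          t = 0;                /* clamp to the ends of the segment */
       if (t > 1)
          t = 1;
       dx = a.x + t * dx - p.x;
       dy = a.y + t * dy - p.y;
       return dx * dx + dy * dy;
    }

    /* Intersection of the infinite lines through a-b and c-d.  Returns 0 if
     * they are parallel, otherwise fills *out. */
    static int line_intersect(struct pt a, struct pt b, struct pt c, struct pt d, struct pt *out)
    {
       double den = (b.x - a.x) * (d.y - c.y) - (b.y - a.y) * (d.x - c.x);
       double t;
       if (den == 0)
          return 0;
       t = ((c.x - a.x) * (d.y - c.y) - (c.y - a.y) * (d.x - c.x)) / den;
       out->x = a.x + t * (b.x - a.x);
       out->y = a.y + t * (b.y - a.y);
       return 1;
    }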
Another key thing is the ability to inset a new path that is a distance inside an existing path. In principle this is simple, but you have to allow for mitered corners that are too long and interior corners that consume the line totally; however it was pretty straightforward. The big problem though is bits that overlap or go inside out because the original area is too small at that point. I have the inset working pretty well so can make the perimeter extrude path for each layer now. I even have some overlap code, but it is messy and not foolproof yet.
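The mitered corner part is the easy bit: push each of the two edges meeting at a vertex inwards along its normal, and the new vertex is where the pushed edges meet. A sketch of that, assuming an anti-clockwise outline with y up (so the inside is on the left of each edge); the over-long miters and inside-out cases are exactly the messy part not shown here:

    #include <math.h>

    struct pt2 { double x, y; };

    /* Sketch: inset the vertex b of the path a->b->c by distance d. */
    static struct pt2 inset_vertex(struct pt2 a, struct pt2 b, struct pt2 c, double d)
    {
       /* unit left (inward) normals of edges a->b and b->c */
       double l1 = hypot(b.x - a.x, b.y - a.y), l2 = hypot(c.x - b.x, c.y - b.y);
       double n1x = -(b.y - a.y) / l1, n1y = (b.x - a.x) / l1;
       double n2x = -(c.y - b.y) / l2, n2y = (c.x - b.x) / l2;
       /* a point on each edge after pushing it inwards by d */
       double px = b.x + d * n1x, py = b.y + d * n1y;   /* on pushed a->b */
       double qx = b.x + d * n2x, qy = b.y + d * n2y;   /* on pushed b->c */
       double ex = b.x - a.x, ey = b.y - a.y;           /* direction of a->b */
       double fx = c.x - b.x, fy = c.y - b.y;           /* direction of b->c */
       double den = ex * fy - ey * fx;
       struct pt2 r;
       double t;
       if (den == 0)
       {                        /* straight on: just shift the point inwards */
          r.x = px;
          r.y = py;
          return r;
       }
       t = ((qx - px) * fy - (qy - py) * fx) / den;     /* where they meet */
       r.x = px + t * ex;
       r.y = py + t * ey;
       return r;
    }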
So this is where the proper 2D polygon toolkit comes in - you need to identify intersections within a polygon and remove overlap resulting from an inset. You need to be able to perform some key operations on multiple polygons, including union, intersection and difference.
It turns out to be pretty simple to brute force this, which is what I started with, but it does not scale well for speed. There are, it seems, some key algorithms such as Bentley-Ottmann which are quick ways to find intersections between polygons. Of course once I have the intersections I then have to carve up what I get to make union, intersection and difference logic, but that bit should not be too hard.
So, I have only got as far as trying to code these 2D tools. Once done, I can then work out the areas to be filled - either solid (because close to top or bottom of an object) or sparse (because totally interior to the shape) - and also where areas are flying over thin air! For that I plan to make them outset into the adjacent areas to create an anchor for the fill, and work out what is a sensible fill direction to span the space. The fill algorithm itself should not be that complex once I have pinned down the polygons to be filled. This is an area where skeinforge is pretty crude, and I hope I can do better.
Finally, once I have the extrude path for perimeter and fill, I have to check for internal overlaps and sharp corners and adjust flow rate for the exact material to extrude to fill the space correctly. Then I can determine the E (extrude) parameters for each path to be extruded, and finally make actual GCODE.
So, maybe half way there. Maybe next weekend I'll finish it. I had hoped it was a one-weekend project, but sadly not, especially with stopping to watch the F1 (well done Lewis).
I hope the end result is better than skeinforge, or at least as good. It will be different. I think it will be a lot faster (signs so far are that it is). But the main thing is I will have learned a lot doing it - which is why it is fun in the first place.
2011-07-22
OFCOM, again, again
Now we have the location data sorted, I have been trying to work out ways to help with providing more reliable and accurate data to emergency services for calls to 999/112. This is especially important where we don't have a location for a VoIP number or the caller is calling from more than one place (nomadic).
The cunning plan I came up with works because a lot of our VoIP customers are also our Internet customers, and that is likely to be common with lots of ISP+VoIP providers. Also, a lot of our Internet customers are on DSL lines.
Basically, if we have no location data but the call comes from one of our IPs that is on a broadband line - we can find the phone number of the broadband line (a BT phone number on BT exchange).
So why not send that as the calling number on the 999 call. That way they get the location spot on.
This has huge advantages over the alternative proposals for handling nomadic callers.
- 999 service need no new protocols or systems to get the location from IP
- End user has a typically familiar number quoted on the call to avoid any confusion
- The area code of the number is what the 999 operator is expecting for the location (often not the case with VoIP services)
- The VoIP+ISP provider is not having to maintain the location data
- It can be implemented by many ISP+VoIP providers and is self contained
- It has none of the privacy issues or feature creep of NICC ND1638
I suggested it to OFCOM. They shot it down in flames because it means we would be relying on BT to have accurate location data and we have no way to guarantee that!
What?!?!??! This would be providing location data that is very very likely to be very accurate instead of no location data. Clearly a huge step in the right direction. And OFCOM think we should not do it.
I wonder why I bother.
P.S. More ideas: we could even have contracts with other ISPs such that we would pass the 999 call to them to pass to 999 if it is one of their IP addresses so they can insert the CLI on the way (not tell us the CLI) - as long as we have a short timeout fallback if they don't reply or reject the call so we can send on without location data anyway. Of course the call media can go direct from their IP to their 999 gateway then reducing the interdependencies.
The dark side
Whilst I am not on facebook (well, Thrall is, and I know his password), I have signed up to Google+
Have I really turned to the dark side?
2011-07-19
Thrall to the rescue
Well, I hope Maureen is OK, whoever she is. Thrall is not known for his work handling emergency calls, but...
Thrall is a 6' fibre-glass orc that lives in our training room at the office. He was a present from a customer (thanks Mike). For those that don't know, Thrall is a character from World of Warcraft, where the main land is called Azeroth. We gave Thrall an ID card, obviously. We called him Thrall Horde, Training Room Supervisor. But when it came to a phone number I gave him Azeroth (02000) 200 000.
Most people assume that is a made up number, as 02000 is not a valid UK area code, obviously, as it would mean a London number starting 00. Now I probably should have given him a number from the blocks reserved for fiction, but I am not sure I wanted it to look like he lived in Walford. In fact I wanted a number that worked, just for fun. What people do not generally know is that 0200 is a special prefix which is used for hidden phone numbers used internally in the telephone system. They are used as the real numbers behind things like 0800 numbers and are never called directly, so don't need to use up normal number space. They can however be called, and Thrall's number gets me!
So, this morning, in the bath, I get a call for Thrall (yes, waterproof phone, and all that). Now, he gets very few calls, either wrong numbers, or the odd customer checking if the number is real. Someone even gave the number to some debt collection agency, LOL. However this time I get what sounds like an old lady and I realise she is saying "please help me". She seems not to be able to answer me and after a while she hung up. I called back and got an answering machine explaining that it is Maureen's phone and she is hard of hearing. Eeek!
OK, so this is a case of calling 112. It is rare one calls emergency services. I had not set up location data for my phone in the house (done now!).
The operator started by asking police, fire or ambulance. I explained the call and he said I had to pick one as he can't pick for me! This seems odd, as how am I in any better position to decide - he is the one with training, not me - and anyway I thought they were meant to be able to handle even silent calls!
We agreed police, and then he wanted to know where I was. I said Bracknell, and after a while he could not find it and asked me to spell it. Twice I said it would be better, as the caller was a London number, to put me through to police in London, but no he went for Thames Valley instead.
Thankfully the police were much more on the ball - took the details and said they would contact the Met police to check it out. I'll see if I get a follow up call later. Would be nice to know what happened. And I do hope she is all right.
2011-07-17
Weather radar
I did very badly trying to find any sort of live weather map for UK.
The apps I could find were all US based, and as we were driving through some rather stormy weather back from Harlow I was trying to find something a tad more live. Saying the forecast for here and now was "light showers" did not cut it given the nature of the storms we were driving through.
Ironically the best overview I could find was the A&A storm tracker which geographically maps where lines are losing sync on ADSL!
2011-07-13
3D spam?
What else can I add!
P.S. This was pretty much Andy Lowe's first suggestion on hearing I had a 3D printer.
2011-07-12
How BT should do it?
This is a serious suggestion.
1. It should be possible for an end user to buy the access link from BT (or other local loop provider) to get them from their premises to the exchange, and then separately pay for the IP and/or voice services from the exchange to the world.
2. The access link should include an active NTE as part of the service, with a handover such as Ethernet for data or an analogue pair for voice, that can be tested to and beyond as part of that access service. i.e. BT could tell if the link from exchange to end user has packet loss, and even test out from the Ethernet port to the end user kit.
Doing this would solve all sorts of issues.
(a) price - as ISPs pay BT or other local loop providers now, this removes part of that step and means the end user pays directly. This should mean no real difference in cost.
(b) choice - the access link often has install costs and minimum terms, and making ISPs pay these causes problems with costs for migration and changes. A system of pay local loop provider for access and other companies for IP and voice means choice of back end providers, even more than one at a time so you can switch between as you need.
(c) test and repair - a system to test the access line to the active NTE and even beyond (Ethernet pair tests from the NTE) means that there is no issue over SFI charges and all that crap. If the line is good, BT can prove it. If not they can fix it. No unknown kit on the end of the line that means it could be end user fault.
(d) new services like FTTP have a single access with multiple IP and multiple voice services on the NTE. This would allow the costs to work. Right now the costs only work if the voice and IP providers are the same, else both are paying BT for the access.
(e) This would allow all sorts of innovative data and voice services without the access link overhead - so trialling a new ISP would be normal and free as the ISP has no minimum terms or install costs to pay.
(f) It is like the well proven dialup model - people paid for line and calls to BT - it even allowed "free" ISPs to evolve.
(g) The service could standardise the handover at the exchange to a GEA (gig ethernet) with an Ethernet link to the end user and SIP handover for voice. This could apply for conventional PSTN where the media conversion is in the exchange, and for FTTP where it is at the NTE. It could allow ADSL1, ADSL2+, FTTC, and FTTP, and even EAD, all on the same platform.
"up to"
Why the public seem not to understand this simple term is beyond me, but it seems to be the case.
The technology for DSL allows speeds that adapt to the line conditions, so you will get a speed depending on line length and quality. The technology itself has different types, so ADSL1 could get up to 8128Kb/s sync, which is roughly 7.15Mb/s IP rate when allowing for various overheads in the protocols. ADSL2+ gets you up to around 20 to 21Mb/s IP rate at the maximum sync possible. FTTC is higher still.
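To give a rough feel for where those numbers come from (my arithmetic, not an official figure): ATM carries 48 bytes of payload in every 53 byte cell, so 8128Kb/s of sync is at best 8128 x 48 / 53, about 7360Kb/s, of cell payload, and the per-packet PPP/AAL5 overheads on full size packets knock off a bit more, which is how you end up in the region of 7.15Mb/s of actual IP throughput.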
So, obviously, ISPs advertised services as "up to 20Mb/s". The full rate is possible, but you have to be pretty close to the exchange. Typically people get lower rates.
The problem is that for some reason people felt cheated if their line only gets 6Mb/s, for example. Somehow people read "up to 20Mb/s" as "at least 20Mb/s" when it means the opposite. In fact if I bought an "up to 20Mb/s" service and got 21Mb/s then that would be false advertising!
So OFCOM have started asking ISPs not to say "up to 20Mb/s". You will note the A&A site says things like "sync rates of not more than 24Mb/s" for ADSL2+. I.e. saying "not more than" instead of "up to", even though clearly the same meaning. Also, as our pricing is not based on speed this is buried in the detail of the specific service and we have a page explaining overheads and so on.
Of course, like all ISPs, if you put a postcode or line number on the web site we tell you a fairly realistic estimate of speeds based on BT line checker data. OFCOM's code of practice is however totally crazy, as I think I have ranted before.
What I just spotted today was a TV advert for broadband from our favourite telco. They have been just as sneaky by saying "we give you a personalised speed estimate, up to 20meg". So they are still saying "up to 20Mb/s" just saying that the personalised speed estimate will tell you a speed up to 20Mb/s not that the line will go up to 20Mb/s.
If the public felt misled before I cannot see how this subtle change really makes any difference, and I have to wonder how much time and effort (i.e. taxpayer's money) went on this.
Oh well.
Migraine
I think I have only ever had one migraine before and it was bloody strange.
Now I have another, and at least I know it has happened before and will pass.
What is strange is that it is a visual effect, and this time it has started right in the middle of my vision (so I am kind of typing this blind). It does mean I can actually see the effect this time though, as last time it was slightly off centre and you can't look at something off centre when it is all in your head.
It appears to be both eyes, i.e. the effect is clearly not actually in my eyes but in my brain.
The best way I can describe it is like looking through some sort of strange cut glass, with lots of sharp angles and colours all in front of what you are looking at. The colours are like the colours on the edge of a prism - sort of not quite really there. It started as a dot and is getting gradually bigger and bigger. It is flashing, and I am sure I have seen the effect on an old BBC micro before now, with the alternating red/green flashing colours and random triangle plots.
Also, a headache - not too bad, but getting worse, and a feeling of being spaced out.
I would go home, but I cannot see that working well on a bicycle.
It is damn strange trying to read this as I have to read it off centre with what vision I still have.
Last time it lasted maybe half an hour. Let's hope not too long this time.
Hmm, now it is bigger I can tell it is only my left vision, and the headache is only on the right.
2011-07-11
Gearing up
The 3D printer kit I got (shapercube) came with several "printed" parts, including two gears used in the extruder.
The extruder is the bit that pushes the plastic in to the hot end and this is a "Wade" type extruder which includes gears from stepper to the hobbed bolt that actually forces the plastic in to the hot end.
The gears are described as 11t17p and 39t17p. Well, this weekend, one of the teeth snapped off. It was just working well enough to print a new 11t17p and carry on, but naturally I felt I could make better gears.
The 11t and 39t is easy - "number of teeth" - but the 17p fooled me. The p is pitch, so I assumed it was circular pitch - i.e. tooth to tooth spacing along the pitch circle of the gears. I got close with 0.17" and even 17 pixels (where a pixel is 1/90th of an inch). Not quite though. Turns out it is diametral pitch - 17 teeth per inch of pitch diameter - which puts the teeth 17 per π" along the pitch circle, so no wonder I could not work it out.
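For anyone who wants to check the arithmetic, a quick sketch (assuming 17p really is a diametral pitch of 17 teeth per inch of pitch diameter):

#include <stdio.h>

int main(void)
{
   double pi = 3.14159265358979;
   double dp = 17.0;                  /* diametral pitch: teeth per inch of pitch diameter */
   int teeth[] = { 11, 39 };          /* the two extruder gears */
   for (int i = 0; i < 2; i++)
   {
      double dia = teeth[i] / dp;     /* pitch diameter in inches */
      printf("%dt: pitch diameter %.3f\" (%.2fmm)\n", teeth[i], dia, dia * 25.4);
   }
   printf("circular pitch %.3f\" per tooth\n", pi / dp);
   return 0;
}

The circular pitch comes out as π/17, about 0.185", which is why 0.17" was close but not quite.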
The results are on thingiverse now. They work well. I made chevron style self centering gears, and I made the gears have 45 degree edges top and bottom for easy printing and nice appearance. They mesh well with no slack. Within minutes of posting I have several downloads.
Next replacement part - a Z axis wobble arrestor!!!
Arrrg OFCOM, again
OK, there is a rule (general condition 4) that says we have to make location data available to emergency services for 999 calls (and 112).
My reading of it is pretty simple, in that (a) there is no location data in the signalling system, so nothing to send, and (b) we only have to make it available, which could be via a web site if we wanted - nothing requires us to even "agree technical standards".
OFCOM's view is very different, though they fail to explain why in any sensible detail. They consider we have to update BT's database for emergency calls (based on CLI). Well, we could use the C&W database and routing instead, they say. But still, we are expected to send (not "make available") the data in a specific way to a specific system.
Let's try and meet that new and unwritten requirement, shall we?
The catch is that for a load of the numbers we have we can't do that. The issue is we have two carriers/number hosters, and a load of our numbers actually come from one of the carriers. We only use one carrier for 999 calls, and we can only update numbers we have hosted with them on to BT's database (for now). So there is a whole load of numbers we cannot yet update. We have around 4 million numbers we can and a few thousand we cannot.
There is a long term plan (which could be months to sort this) allowing us to update for any of our numbers via the one carrier. However OFCOM are bullying. No amount of "we do actually comply anyway so stop bullying" is working.
So I had a cunning plan. A simple plan. It is a sort of NAT for phone numbers (someone on irc suggested it was Customer user Number Translation, or CNT for short). But basically any number I cannot update gets an 0200 number allocated, and that is updated in the BT database. Calls to 999 present the 0200 number, and they get the location data. (0200 is a special code for hidden numbers in the network, so ideal for this.)
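Just to sketch the idea (this is not the real call server code, and the numbers here are invented), it amounts to a simple lookup consulted when deciding what CLI to present on a 999/112 call:

#include <stdio.h>
#include <string.h>

struct alias { const char *real, *presented; };

/* hypothetical mapping for numbers we cannot yet load directly */
static const struct alias aliases[] = {
   { "+441234567890", "+442000000001" },
   { "+441234567891", "+442000000002" },
};

/* pick the CLI to present on a 999/112 call */
static const char *cli_for_999(const char *cli)
{
   for (size_t i = 0; i < sizeof(aliases) / sizeof(*aliases); i++)
      if (!strcmp(aliases[i].real, cli))
         return aliases[i].presented;  /* present the 0200 alias instead */
   return cli;                         /* number already in the BT database */
}

int main(void)
{
   printf("999 call from +441234567890 presents %s\n", cli_for_999("+441234567890"));
   return 0;
}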
Job done - compliant - all end users calling 999 can have location data (if they set it).
Is that good enough for OFCOM?
No, of course not. But I cannot see how it is not compliant. They seem happy with it as a temporary solution. It makes no sense - either it is compliant or it is not. There is no "temporary" to it!
We will have to see what they say.
2011-07-08
Regulating ISPs
Well, I have been harping on about location data for emergency services for our VoIP services and fun and games with OFCOM.
What was interesting yesterday was learning more about where this is going over the next few years.
From a technical point of view, a VoIP provider cannot normally locate a customer as all they really have is the IP address details of the far end. The internet acts as a long extension cable connecting the end user to the VoIP provider. Expecting the VoIP provider to locate the caller is not practical.
So the plan is pretty simple - VoIP provider gives details of IP/session to the emergency services and they ask the ISP for details of the location, in real time. The ISP is technically in a much better position to provide location data.
There are a whole load of technical issues with this, and lots of edge cases (don't even mention TOR, or NAT64). However, for a lot of cases, even dynamic allocation with DSL, it is technically possible for an ISP to identify the endpoint for an IP in real time. After all they have to route packets in real time, so that makes sense.
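To sketch what is being asked for (a toy example, nothing like a real ISP back end, and every name and record here is invented): chain the live session table, which an ISP effectively has from RADIUS accounting anyway, to the install address held in the ordering system:

#include <stdio.h>
#include <string.h>

struct session { const char *ip, *circuit; };   /* live sessions, e.g. from RADIUS accounting */
struct circuit { const char *id, *address; };   /* install address from the ordering system */

static const struct session sessions[] = {
   { "192.0.2.45", "circuit-001" },
};
static const struct circuit circuits[] = {
   { "circuit-001", "1 Example Street, Sometown" },
};

/* real time lookup: IP -> current session -> circuit -> install address */
static const char *locate(const char *ip)
{
   for (size_t s = 0; s < sizeof(sessions) / sizeof(*sessions); s++)
      if (!strcmp(sessions[s].ip, ip))
         for (size_t c = 0; c < sizeof(circuits) / sizeof(*circuits); c++)
            if (!strcmp(circuits[c].id, sessions[s].circuit))
               return circuits[c].address;
   return NULL;   /* not currently online, NAT, tunnel, etc. */
}

int main(void)
{
   const char *where = locate("192.0.2.45");
   printf("%s\n", where ? where : "unknown");
   return 0;
}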
The problem really is that this is a technical committee trying to find a technical solution. They do not have the remit of trying to work out how the hell this happens in practice from a legal point of view, or the policy implications. This sort of division of considerations is not uncommon, but it could mean some interesting times ahead for ISPs, especially small ISPs.
One of the big issues is that this is complicated for an ISP to do. A lot of ISPs, especially small ISPs, buy in the various kit to make it all happen and it all works well together as you would expect. But this sort of location lookup needs more than just BGP and RADIUS. It needs integration with ordering systems and billing systems, and much more. These are often home made or even manual systems in small ISPs and simply not set up for real time queries. The variation from ISP to ISP makes an off the shelf solution difficult. Oh, and the final catch is the 99.999% reliability requirement which is probably something that no ISP can really guarantee.
So, this really will only happen if there is regulation requiring it to happen - given how well OFCOM worded GC4, I can see that being a nightmare. As an ISP I don't like any new regulation, and it is clearly unfair on the ISP if they have to do the work for someone else's VoIP service. After all, the ISP is just passing packets. Why are they tied up with onerous voice regulation just because someone else is sending voice packets over their network, any more than they are tied up with banking regulations because on-line banking goes over their network? Though yes, like us, many ISPs also do VoIP, that is not always the case. At the moment ISPs can choose not to get involved in all this 999/112 hardship by not doing VoIP, but that seems likely to change.
Of course, one of the other issues is that, assuming it is regulated for and enforced, all ISPs will have a handy real-time IP to location lookup service. It is, of course, only for emergency services. But can you imagine that government departments will not want to get their hands on something so useful? How long before the copyright industry lobby for access (after all, the ISPs are already doing it at that point, so no extra cost)? Before you know it we will have councils using the interface to track kids playing truant from school.
And then, of course, what of ISPs that sell the service? It is bad enough geo-location services guessing you might want to meet girls in low earth orbit, but what if they actually had your full address based on your IP? I am sure the DPA would have something to say, but some contracts already seem to allow crazy data sharing like this. Commercially it may be the only way for the ISP to recover some of the money spent making such a system.
The only light at the end of the tunnel is that there is an idea for the next stage - where smarter VoIP devices find where they are (GPS for example). This could involve protocols for devices to ask their DHCP server or upstream servers for their own location, which is easier to do and clearly has fewer privacy issues. The device then sends the information with the call and that gets passed through to the emergency services. I'd like to see us moving to that type of solution, and not opening a huge can of worms by imposing all sorts of new requirements on ISPs... We'll have to see what happens.
Interesting times ahead.
2011-07-06
3D printing of important things
What could be more important than the office coffee machine?
It holds a handful of beans normally and we are forever filling it up.
So I have printed this handy extension adding 40mm more height and holding a lot more beans.
The fun bit is that it took 3 hours to print. I have strung a broom handle from hooks on the ceiling to hold the reels of plastic. This meant, for a change, that this 3 hour print needed no intervention from me at all - it just worked. Needless to say that did not stop me watching it like a hawk waiting to see what was going to go wrong.
The end result fits perfectly and has a really solid 3mm thick wall.
(Printed in translucent PLA at 215C and 50mm/s feed rate)
P.S. Yes, 5 different people have told me about the BBC article on 3D chocolate printing.
2011-07-05
Top posting in emails
Naturally we send emails correctly, trimming and quoting the original email, and including our replies in chronological order in plain text. It is company policy as well as common sense (IMHO).
However, I got a comment from someone saying they are not ignoring emails: "... however we are facing difficulty in reading your reply and to find where is your reply in the mail as the format is different."
I had to read that a few times to make any sense of it. They top posted their comments, of course. Sounds like they cannot understand anything but top posted replies.
I could not possibly say who sent it, favourite or not.
2011-07-04
Arrrr!
Thrall was being a Bouncy Castle Guardian for the day, and was not impressed with having to dress like a pirate.
Bobby is 3! Fun was had by all.
A bouncy castle full of 3 year olds probably counts as a Horde.
Thrall did get new teeth though. Seems his previous dentist provided blu-tac teeth. Now he has 3D printed PLA teeth.
2011-07-01
Colour coded
OK, well, getting the hang of it now.
ABS and PLA are different types of plastic, so they have different characteristics, in particular temperatures. PLA works at lower temps than ABS, and it expands (and hence contracts) less. PLA is, IMHO, easier. ABS needs a hot bed of at least 100C and maybe more; PLA will stick to a 60C hot bed.
What surprised me is that different colour ABS is very different.
The pink ABS (yes, the web site said magenta and no, it is clearly pink isn't it) seems more than happy at 245C or even 250C. It actually turns purple when hot, which is (a) pretty, and (b) useful to see WTF is going on when printing. Returns to pink as it cools.
However, to my surprise, the red ABS is not happy at 245C. It is almost liquid at that temp, and before you know it you have a hot end full of boiled plastic and bubbles. It is happy at 220C. It is also shiny. The pink ABS is matt.
I had no idea different colours behaved so differently!
P.S. The red one is a salt pot, and works!
Hmmm, OFCOM
Well, I think we are making progress at least. Had a conference call today. They don't even know how to format a London phone number!
I think they are happy we are trying to do the right thing and work to the spirit of general condition 4 - specifically, we do provide location data to emergency services, where customers provide it, for most numbers, and expect soon to be able to do so for all numbers.
My gripe, and continued annoyance, is they keep saying we are not currently compliant and why is it taking so long, and so on.
Some how "make available" turns in to a much more complex requirement. All they had to say was "make available in a format and by means as agreed with the emergency services" and it would have been what they actually wanted. They say things like that in other conditions. It seems "putting on a web page" is not "making available", which is odd really.
Also, somehow, data processed by the electronic communications network becomes data available to the communications provider, or some such. Well, the Comms Act is quite clear on the definition, and it is a technical one - a "transmission system" which uses signals - so I think they are quite wrong on that. There is no data in the transmission system identifying the location of the calling party terminal equipment. So we are complying with GC4, and much more.
However, seems we may make progress on location data for remaining numbers. We have several ways to tackle it. Some quicker than others. We'll see which is the most practical.
I just don't like being bullied (even by OFCOM), even more so when we are actually trying to do the right thing, as fast as our suppliers let us.
It's REINing
Well, makes a change to be on the receiving end...
Seems a power supply for a phone was cooking gently. I guess I have to thank our neighbour for reporting poor ADSL to BT and getting a REIN engineer involved as it probably saved us from a fire.
This was causing bad enough RF interference to wipe out DSL several houses away.
Same engineer that solved a REIN issue when I had ADSL many years ago, and one of the good guys in BT.
P.S. REIN is the BT term for the interference that affects your broadband. It stands for Repetitive Electrical Impulse Noise.