Discussion:
[geda-user] pour clearing around pads
Dave Curtis
2014-07-06 04:11:18 UTC
Permalink
I'm working on a footprint where if I follow the data-sheet geometry and
my normal design rules, I end up with a footprint where very skinny
copper peninsulas sneak between pads when it is placed in a polygon.
The peninsulas neck down to less than the minimum copper width rule.

So, first off, I'm surprised that the Cu polygon allows Cu to pour into
a space less than the minimum width rule.

Secondly, I'm wondering if fab houses might flag that as a DRC violation
even if pcb doesn't.

Third, is it legal to specify zero-width Pad[] elements in a footprint,
and assign clearance values, in order to composite some clearance into
the footprint? Or is pcb going to get cranky if a Pad[] is zero width? I
could imagine it would be hard to connect to... so I guess it needs to
share a pin number with something else.

footprint work-in-progress attached.
Lilith Bryant
2014-07-06 04:41:27 UTC
Permalink
Post by Dave Curtis
I'm working on a footprint where if I follow the data-sheet geometry and
my normal design rules, I end up with a footprint where very skinny
copper peninsulas sneak between pads when it is placed in a polygon.
The peninsulas neck down to less than the minimum copper width rule.
So, first off, I'm surprised that the Cu polygon allows Cu to pour into
a space less than the minimum width rule.
Secondly, I'm wondering if fab houses might flag that as a DRC violation
even if pcb doesn't.
I have had a fab house complain about this. I ended up telling them to just
ignore it, and so far it hasn't been an issue, but it does make me a little nervous,
particularly about what might happen to fine slivers that get undercut during etching.

Also, I don't do RF stuff, so ending up with unconnected islands is not an issue for
me.

I have been meaning to write a polygon "bake" tool that fixes this, but have
been put off by the lack of a workable polygon library. Pretty much just needs
to erode then dilate by the minimum clearance. Was going to use shapely/GEOS
in python, but its erosion doesn't seem to work :( So it went into the too-hard
basket for the time being.
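For the record, a recent shapely does erode when buffer() is given a negative distance, so the bake step can be sketched like this (illustrative only; the geometry and the 0.2 minimum-width figure are invented):

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

MIN_WIDTH = 0.2       # hypothetical minimum copper width rule
d = MIN_WIDTH / 2     # erode/dilate radius

# A pour with a skinny 0.1-wide copper peninsula hanging off the right edge.
body = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
sliver = Polygon([(2, 0.9), (3, 0.9), (3, 1.0), (2, 1.0)])
pour = unary_union([body, sliver])

# Morphological opening: a negative buffer erodes, a positive one dilates.
# Features thinner than MIN_WIDTH cannot survive the erosion step.
baked = pour.buffer(-d).buffer(d)
```

The result is contained in the original pour, with the under-width peninsula removed apart from a small rounded stub at its root.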
Eric Brombaugh
2014-07-06 05:05:36 UTC
Permalink
I'm working on a footprint where if I follow the data-sheet geometry and my normal design rules, I end up with a footprint where very skinny copper peninsulas sneak between pads when it is placed in a polygon. The peninsulas neck down to less than the minimum copper width rule.
So, first off, I'm surprised that the Cu polygon allows Cu to pour into a space less than the minimum width rule.
Secondly, I'm wondering if fab houses might flag that as a DRC violation even if pcb doesn't.
This has been a problem with PCB poly fill for as long as I've been using it. I've never had a fab house complain about it, but I have had boards come back with shorts due to the thin copper fingers lifting and drifting around under the mask.

PCB's poly fill algorithm is generally a disaster - it leaves large regions unfilled if there are intervening traces and pads, often crashes the application when used in complex geometries, and, as seen above, results in significant DRC violations. The only way around this is to carefully subdivide the polygons into smaller regions by hand in order to work around the blockages, but that results in designs with dozens if not hundreds of small polygons that become difficult to maintain. Other FOSS pcb design tools seem to handle this function more gracefully - see how KiCAD does it for example.

Having vented, I would note that I'm a fan of PCB and I continue to use it happily despite this wart. I suppose if it bothered me enough I'd make the effort to understand how this works and try to fix it, but there always seems to be something more fun to do.

Eric
DJ Delorie
2014-07-06 05:16:31 UTC
Permalink
Post by Dave Curtis
The peninsulas neck down to less than the minimum copper width rule.
I typically expand the pad clearances until such necks vanish.
Post by Dave Curtis
So, first off, I'm surprised that the Cu polygon allows Cu to pour into
a space less than the minimum width rule.
Polygon pours are handled poorly in pcb.
Post by Dave Curtis
Secondly, I'm wondering if fab houses might flag that as a DRC violation
even if pcb doesn't.
Some might. I've had one break loose and cause a short in a
manufactured board before, so I'm particularly wary of them.
Post by Dave Curtis
Third, is it legal to specify zero-width Pad[] elements in a footprint,
and assign clearance values, in order to composite some clearance into
the footprint?
I think this is fine, although perhaps a tiny non-zero width might be
needed. I don't know if these cause outputs in the gerber file,
though, so be careful.
Dave Curtis
2014-07-06 15:05:34 UTC
Permalink
Post by DJ Delorie
Post by Dave Curtis
The peninsulas neck down to less than the minimum copper width rule.
I typically expand the pad clearances until such necks vanish.
On this footprint that would be slightly annoying, although workable. I
did consider doing that.
Post by DJ Delorie
Post by Dave Curtis
Third, is it legal to specify zero-width Pad[] elements in a footprint,
and assign clearance values, in order to composite some clearance into
the footprint?
I think this is fine, although perhaps a tiny non-zero width might be
needed. I don't know if these cause outputs in the gerber file,
though, so be careful.
Is there a reliable way to validate that zero-width pads are usable? I
was thinking that this might be a good way to deal with the gang mask
problem as well.

I'm thinking that a reasonable way to specify clearance/mask features
that don't have associated copper is:

1. Draw a Pad[] with zero width, but with clearance/mask set to create the
desired relief.
2. Give the Pad[] a pin number that is *not* used in the part, that way
it will not show up in the netlist and cause rat/routing/connectivity
confusion.
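For concreteness, that recipe might look like this in a footprint file (purely illustrative: the coordinates, centimil values, and the unused pin number "99" are made up, and as later replies note, the file format makes no guarantee about zero-width pads):

```
Element["" "DEMO" "U?" "" 0 0 0 0 0 100 ""]
(
	# A normal copper pad, pin 1.
	Pad[-2000 0 2000 0 4000 2000 6000 "1" "1" "square"]
	# Hypothetical zero-width Pad[]: thickness 0, but non-zero
	# clearance/mask values to composite extra relief into the
	# footprint. Pin number "99" is unused by the part, so it never
	# appears in the netlist and draws no rats.
	Pad[-2000 8000 2000 8000 0 8000 12000 "" "99" "square"]
)
```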
DJ Delorie
2014-07-06 17:08:06 UTC
Permalink
Post by Dave Curtis
Is there a reliable way to validate that zero-width pads are usable?
Code review.
Post by Dave Curtis
1. Draw a Pad[] with zero width, but with clearance/mask set to create the
desired relief.
2. Give the Pad[] a pin number that is *not* used in the part, that way
it will not show up in the netlist and cause rat/routing/connectivity
confusion.
I suspect that a zero-width pad is still "a pad" according to parts of
the code, so it will still block traces and cause shorts despite being
zero-width.
Dave Curtis
2014-07-06 19:55:04 UTC
Permalink
Post by DJ Delorie
Post by Dave Curtis
Is there a reliable way to validate that zero-width pads are usable?
Code review.
Where is the canonical git repo these days? It's been a while since I've
looked at it.
Post by DJ Delorie
Post by Dave Curtis
1. Draw a Pad[] with zero width, but with clearance/mask set to create the
desired relief.
2. Give the Pad[] a pin number that is *not* used in the part, that way
it will not show up in the netlist and cause rat/routing/connectivity
confusion.
I suspect that a zero-width pad is still "a pad" according to parts of
the code, so it will still block traces and cause shorts despite being
zero-width.
Shouldn't it block traces? Isn't that the point of clearance? Although
I guess if you set thickness to zero to try to create a gang mask it
would be annoying if it blocked traces.

Conceptually, it seems like a "zero width pad" solves a couple of
long-standing annoyances. It could be used to clean up the kind of
clearance issue I'm having here, and it could also create gang masks.
If people agreed, I could see a couple of approaches...

1. Declare that a zero width pad is a valid primitive that does not
impact connectivity in any way, is guaranteed not to render on a copper
layer, but allows either or both of thickness and mask-relief to be
non-zero. The current code might do the right thing in a lot of places,
but long term I suspect this would create a lot of special-case checks
in the code and of course becomes burdensome legacy in the spec. It
does not require any new footprint parsing code.

2. Create new primitive(s) for footprint files, perhaps Clear[] and
Mask[], that can attack the problems directly. It also strikes me that
a Keepout[] primitive might share some behavior of Clear[]... it blocks
routing but doesn't draw anti-copper.
DJ Delorie
2014-07-06 20:04:16 UTC
Permalink
Post by Dave Curtis
Where is the canonical git repo these days? It's been a while since I've
looked at it.
git.geda-project.org
Post by Dave Curtis
Shouldn't it block traces? Isn't that the point of clearance?
Although I guess if you set thickness to zero to try to create a
gang mask it would be annoying if it blocked traces.
Polygon clearance can be larger than the line space rule, too.
Post by Dave Curtis
Conceptually, it seems like a "zero width pad" solves a couple of
long-standing annoyances. It could be used to clean up the kind of
clearance issue I'm having here, and it could also create gang masks.
If people agreed, I could see a couple of approaches...
The "Right" way is to have a separate layer for extra clearance, but
pcb isn't designed to handle that.
Dave Curtis
2014-07-06 22:05:35 UTC
Permalink
Post by DJ Delorie
Polygon clearance can be larger than the line space rule, too.
Do you mean polygon-to-pad clearance is different from track-to-pad? I'm
confused.
Post by DJ Delorie
Post by Dave Curtis
Conceptually, it seems like a "zero width pad" solves a couple of
long-standing annoyances. It could be used to clean up the kind of
clearance issue I'm having here, and it could also create gang masks.
If people agreed, I could see a couple of approaches...
The "Right" way is to have a separate layer for extra clearance, but
pcb isn't designed to handle that.
Well, yes. I vaguely remember from the last time I looked at pcb that
adding a new layer type is quite pervasive. I was hoping one of the
other approaches would fit better with existing infrastructure and
provide a way forward that didn't require so much heavy lifting....
along the philosophy of "there's a right way, a wrong way, and the pcb
way...."
Gabriel Paubert
2014-07-07 06:41:33 UTC
Permalink
Post by DJ Delorie
Post by Dave Curtis
The peninsulas neck down to less than the minimum copper width rule.
I typically expand the pad clearances until such necks vanish.
I did this until holes were added to polygons. Now I use holes to
precisely control where the copper pour stops. But holes have
their own problems (moving them, mostly; they are rigidly linked
to the containing polygon), so I only draw them as the last step,
when everything else is essentially ready for production.
Post by DJ Delorie
Post by Dave Curtis
So, first off, I'm surprised that the Cu polygon allows Cu to pour into
a space less than the minimum width rule.
Polygon pours are handled poorly in pcb.
That's an understatement. While it had many other defects, I really
liked how Orcad/PCB 386 worked in the mid-90s:

- polygons had a class (filled, no-fill, and others for purposes
I can't remember since I never used them)

- polygons had a Z order (an integer) to define the ordering in which they were
painted (so if you put a high order fill inside a middle order no-fill inside
a low order fill, you get what you expect).

- polygons had "seed points" to provide starting points for fills (no largest
area rule) and could have several of them (no need for an equivalent of
the MorphPolygon command, which has its own problems).

Note that the largest-area rule is ambiguous on at least one of the boards
I designed a few years ago: I had a very symmetrical polygon for some
coplanar waveguide structure. When PCB was "upgraded" to the new polygon
dicer it arbitrarily chose one of the halves, even though the symmetrical
other half was absolutely identical. I actually kept an old version of PCB
for some time because this caused me too many troubles.

There is another bug in PCB's polygon pours: I think that pours should
go through lines that don't have the "clearline" flag set. As far as I
can tell, they don't.
Post by DJ Delorie
Post by Dave Curtis
Secondly, I'm wondering if fab houses might flag that as a DRC violation
even if pcb doesn't.
Some might. I've had one break loose and cause a short in a
manufactured board before, so I'm particularly wary of them.
Post by Dave Curtis
Third, is it legal to specify zero-width Pad[] elements in a footprint,
and assign clearance values, in order to composite some clearance into
the footprint?
I think this is fine, although perhaps a tiny non-zero width might be
needed. I don't know if these cause outputs in the gerber file,
though, so be careful.
I really recommend using polygon holes in this case, I did this before
holes were supported, and this was much worse, despite the defects
of the holes listed above.

Gabriel
Dave Curtis
2014-07-07 16:33:46 UTC
Permalink
Post by Gabriel Paubert
Post by DJ Delorie
Post by Dave Curtis
The peninsulas neck down to less than the minimum copper width rule.
I typically expand the pad clearances until such necks vanish.
I did this until holes were added to polygons. Now I use holes to
precisely control where the copper pour stops. But holes have
their own problems (moving them, mostly; they are rigidly linked
to the containing polygon), so I only draw them as the last step,
when everything else is essentially ready for production.
With my current problem, progress > beauty, so I expanded the pad
clearances :-/
Post by Gabriel Paubert
Post by DJ Delorie
Post by Dave Curtis
So, first off, I'm surprised that the Cu polygon allows Cu to pour into
a space less than the minimum width rule.
Polygon pours are handled poorly in pcb.
<snip>
Post by DJ Delorie
Post by Dave Curtis
Third, is it legal to specify zero-width Pad[] elements in a footprint,
and assign clearance values, in order to composite some clearance into
the footprint?
I think this is fine, although perhaps a tiny non-zero width might be
needed. I don't know if these cause outputs in the gerber file,
though, so be careful.
I really recommend using polygon holes in this case, I did this before
holes were supported, and this was much worse, despite the defects
of the holes listed above.
Worse how?

I spent a few minutes looking at the code. The gerber output has a
check in the aperture selection code where if a zero-width aperture is
requested, it returns NULL, which (if the comments are to be believed)
suppresses any output in the gerber file for zero-thickness elements.

I'm not sure where to look for how a zero-thickness pad might cause
phantom shorts or how it interacts with route blocking. Clues welcome.
Post by Gabriel Paubert
Gabriel
Thanks for your comments. Very helpful.

-dave
DJ Delorie
2014-07-07 18:00:38 UTC
Permalink
Post by Dave Curtis
I'm not sure where to look for how a zero-thickness pad might cause
phantom shorts or how it interacts with route blocking. Clues welcome.
The code considers a pad to be a line segment between two points, and
may do a "do these segments intersect" test independent of the "check
the width" test. djopt, I think, does this - intersection of segments
is a different test than the "happen to touch due to width" test.
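That distinction can be illustrated with shapely standing in for pcb's actual C code (coordinates invented):

```python
from shapely.geometry import LineString

# Model two pads by their centre-line segments, as the intersection
# test does -- width never enters into it.
pad_a = LineString([(0, 0), (10, 0)])
pad_b = LineString([(5, -5), (5, 5)])

# The segments cross, so a pure segment-intersection test reports a
# connection even if both pads have zero width.
print(pad_a.intersects(pad_b))  # True
```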
onetmt
2014-07-06 08:10:49 UTC
Permalink
Post by Lilith Bryant
Post by Dave Curtis
I'm working on a footprint where if I follow the data-sheet geometry and
my normal design rules, I end up with a footprint where very skinny
copper peninsulas sneak between pads when it is placed in a polygon.
The peninsulas neck down to less than the minimum copper width rule.
So, first off, I'm surprised that the Cu polygon allows Cu to pour into
a space less than the minimum width rule.
Secondly, I'm wondering if fab houses might flag that as a DRC violation
even if pcb doesn't.
I have had a fab house complain about this. I ended up telling them to just
ignore it, and so far it hasn't been an issue, but it does make me a little nervous,
particularly about what might happen to fine slivers that get undercut during etching.
Also, I don't do RF stuff, so ending up with unconnected islands is not an issue for
me.
My fab also used to complain about them; this is why I now carefully
draw polygons by hand, deleting and reshaping them when sub-DRC
metal features are created.
Post by Lilith Bryant
I have been meaning to write a polygon "bake" tool that fixes this, but have
been put off by the lack of a workable polygon library. Pretty much just needs
to erode then dilate by the minimum clearance. Was going to use shapely/GEOS
in python, but its erosion doesn't seem to work :( So it went into the too-hard
basket for the time being.
--
Hofstadter's Law:
"It always takes longer than you expect, even when you take into account
Hofstadter's Law."
Lilith Bryant
2014-07-06 09:06:30 UTC
Permalink
<snip>
Post by onetmt
My fab also used to complain about them; this is why I now carefully
draw polygons by hand, deleting and reshaping them when sub-DRC
metal features are created.
Post by Lilith Bryant
I have been meaning to write a polygon "bake" tool that fixes this, but have
been put off by the lack of a workable polygon library. Pretty much just needs
to erode then dilate by the minimum clearance. Was going to use shapely/GEOS
in python, but its erosion doesn't seem to work :( So it went into the too-hard
basket for the time being.
Sorry to answer my own reply here, but I've just thought of a better way to
do this. If the raw polygon is first built with clearances of (P+L) instead
of just P....

(Where P=poly to line clearance, and L is the min line width)

... then that's the erosion step done right there, so would just need a dilation
after that, and that can be handled by "union-ing" an L*2 width line around the
perimeter(s) of the poly.

Is this within the capabilities of the existing infrastructure?
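A sketch of this two-step scheme in shapely (illustrative; the square stands in for a pour already built with P+L clearances, and shapely is only used to check the geometry, not proposed as pcb infrastructure). Dilating a filled region by a disc of radius L only adds material within L of its boundary, which is why the perimeter-union trick matches a true dilation:

```python
from shapely.geometry import Polygon

L = 0.2  # hypothetical minimum line width

# Stand-in for a pour generated with clearances of P+L instead of P,
# i.e. already eroded by L relative to the desired outline.
eroded = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]).buffer(-L)

# Dilation via the perimeter trick: union an L-radius band around the
# boundary (equivalent to stroking an L*2 width line along it).
restored = eroded.union(eroded.boundary.buffer(L))

# For comparison, a "proper" morphological dilation:
proper = eroded.buffer(L)
```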
Peter Clifton
2014-07-06 14:57:19 UTC
Permalink
---
Peter Clifton <***@clifton-electronics.co.uk >

(Sent from my phone)

-------- Original message --------
From: Lilith Bryant <***@gmail.com>
Date:06/07/2014 10:06 (GMT+00:00)
To: geda-***@delorie.com
Subject: Re: [geda-user] pour clearing around pads
<snip>
Post by Lilith Bryant
Sorry to answer my own reply here, but I've just thought of a better way to
do this. If the raw polygon is first built with clearances of (P+L) instead
of just P....

(Where P=poly to line clearance, and L is the min line width)

... then that's the erosion step done right there, so would just need a dilation
after that, and that can be handled by "union-ing" an L*2 width line around the
perimeter(s) of the poly.

Is this within the capabilities of the existing infrastructure?


Not exactly, and it may not work when multiple different objects combine to create a thin feature.

Some polygon outlines and hole contours are also explicit, not created from clearances, so the rule could not be applied globally.

Erode and dilate is one way to solve this all, but those operations (erode in particular) are not trivial to implement.
Lilith Bryant
2014-07-07 07:58:34 UTC
Permalink
Post by Peter Clifton
Post by Lilith Bryant
Sorry to answer my own reply here, but I've just thought of a better way to
do this.  If the raw polygon is first built with clearances of (P+L) instead
of just P....
(Where P=poly to line clearance, and L is the min line width)
... then that's the erosion step done right there, so would just need a dilation
after that, and that can be handled by "union-ing" an L*2 width line around the
perimeter(s) of the poly.
Is this within the capabilities of the existing infrastructure?
Not exactly, and it may not work when multiple different objects combine to
create a thin feature.
Can you describe such a case? I can't see how this could be.
Post by Peter Clifton
Some polygon outlines and hole contours are also explicit, not created from
clearances, so the rule could not be applied globally.
Explicit outlines could be simply cleared by L from the user-placed edge.

The final dilation by L (i.e. by the perimeter augmentation method described above),
would then put the edge back in the right place.
Peter Clifton
2014-07-06 14:50:23 UTC
Permalink
If it crashes or produces an incorrect result, send me a test case if you are able to do so. It won't get investigated and fixed otherwise.


The "keep the biggest piece" behaviour is intentional (at the moment).

Pcb's polygon code has been buggy in the past, but has seemed very stable for a while now after many fixes by Ben and myself.

I've been looking at extending its capabilities recently, and am again seeing problems with that development code, but that is kind of expected at the moment.

One thing which may have affected the numerical stability and certain equality / magnitude checks was the switch to nanometre internal units. This appears to have had an impact on some of the floating-point round-off and EPSILON tests.


DJ Delorie
2014-07-06 17:04:40 UTC
Permalink
Post by Peter Clifton
One thing which may have affected the numerical stability and
certain equality / magnitude checks was the switch to nanometre
internal units. This appears to have had an impact on some of the
floating point round off and EPSILON tests. 
We're not changing the units to Epsilon :-)
DJ Delorie
2014-07-06 17:02:16 UTC
Permalink
Post by Lilith Bryant
Sorry to answer my own reply here, but I've just thought of a better way to
do this. If the raw polygon is first built with clearances of (P+L) instead
of just P....
That just moves the problem to other places where the clearance would
have been P+L+L
Lilith Bryant
2014-07-07 07:52:25 UTC
Permalink
Post by DJ Delorie
Post by Lilith Bryant
Sorry to answer my own reply here, but I've just thought of a better way to
do this. If the raw polygon is first built with clearances of (P+L) instead
of just P....
That just moves the problem to other places where the clearance would
have been P+L+L
I don't think it does.

If the final step (i.e. the "dilate") is union-ing a series of L width "lines"
on the (super-eroded) polygon's perimeter, then no part of it can
possibly end up thinner than L.

How does a "proper" dilation differ from this anyway?
Peter Clifton
2014-07-08 18:17:16 UTC
Permalink
Please don't use zero width pads...

The file format makes no guarantee about how they behave, and it seems like the special-case-free situation should view them as an under-width DRC error.

Adding support for defining additional mask within a footprint should not be insurmountable; it just needs, in the first instance, someone to define the extension to the file format.

Peter


Dave Curtis
2014-07-09 06:09:02 UTC
Permalink
Post by Peter Clifton
Please don't use zero width pads...
OK :)
Post by Peter Clifton
The file format makes no guarantee how they behave, and it seems like
the special case free situation should view them as an under width DRC
error.
Yes, it's a total belly-flop onto "maybe it happens to work."
Post by Peter Clifton
Adding support for defining additional mask within a footprint should
not be insurmountable, just needs in the first instance, someone to
define the extension to the file format.
Extending the file format is the easy part. I can come up with lots of
ideas for syntax. And I could have a patch for the flex .l-file in
minutes and recognize the constructs in the bison code quite quickly
as well. It's getting past that point where we smack into a wall --
it's not clear to me that the internal data structures are ready to
accept copper-clearance and mask-clearance features that are not
associated with a pad or a pin.

A friend across town has been using KiCad for a while, and since we are
interested in building the same sorts of things we try to share what we
can in terms of tools and designs. Right now, we are hot on the "grand,
unified, footprint generator script" problem. We would like to come up
with a single front-end that can create footprints for both pcb and
KiCad so that we could share footprints more easily. So... I've been
looking at the KiCad footprint file format and their new one can handle
a lot of things that are somewhat vexing in pcb -- although I'm not too
hot on the S-expression idea overall. Anyway, KiCad seems to leave some
things out that my friend and I have been talking about -- like keep-outs.

So the point of the above paragraph is, yes, I can suggest some
extensions, and now would be a good time to capture that since I am
trying to wrap my head around the issues right now. What I can do:

1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with whatever settles
out of that.

What I can not do:

Investigate the feasibility of implementing the extensions. I simply
don't know the code well enough.

-dave
Peter Stuge
2014-07-09 08:25:14 UTC
Permalink
So the point of the above paragraph is, yes, I can suggest some extensions,
and now would be a good time to capture that since I am trying to wrap my
head around the issues right now. What I can do:
1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with whatever settles
out of that.
Thanks for that!

I strongly agree with a unified data model effort.

But - in order for a database to become meaningful it needs to
explicitly support all "desired variations" already in use, and
probably have hooks to add new ones. Some examples of such desired
variations are:

* extending smd pads away from packages for easier hand soldering
* different paste aperture margins depending on stencil process
* bumping unplated drill diameters when fab only supports plated holes
* chip vendors having slightly different package specs for same package

and definitely many more.

Each variation needs to be a single check-box in a user interface,
which can be switched on and off at any time. Most variations have
parameters, which also need to be represented in the data model, as
variation profiles, so that users simply select the right fab profile
and done.
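A hypothetical sketch of such a data model (all names and fields invented for illustration): each desired variation is a named, parameterised toggle, and a fab profile bundles the settings so the user just picks a profile.

```python
from dataclasses import dataclass, field

@dataclass
class Variation:
    """One 'desired variation': a named toggle with parameters."""
    name: str                                  # e.g. "extend-smd-pads"
    enabled: bool = False
    params: dict = field(default_factory=dict)

@dataclass
class FabProfile:
    """Bundles variation settings so users select a fab and are done."""
    fab: str
    variations: list

# A profile might switch on two standardized variations with
# fab-specific parameter values.
profile = FabProfile("example-fab", [
    Variation("extend-smd-pads", True, {"extra_mm": 0.3}),
    Variation("bump-unplated-drills", True, {"min_diameter_mm": 0.5}),
])
```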

Crowdsourcing the data is important, ideally allowing publication
from within the app itself, and certainly allowing
installation/updating of data from within the app itself. Think
Firefox extensions. A similar process needs to be supported for
adding a new fab profile, which might contain parameters for several
standardized desired variations, and possibly add fab-specific
variations.

Developing and maintaining such a data model is neither a simple nor a small
task. I'll help work on it.
Investigate the feasibility of implementing the extensions. I simply
don't know the code well enough.
I don't think any existing code fits a unified database and I think
it's OK to create an ambitious project to change that in the medium
to long term. The data model is the first step though, code comes
(much?) later.


Of course this is all no solution for the immediate term, but an
investment in open source tooling that might only really pay off
in a decade or three.


//Peter
Bert Timmerman
2014-07-09 17:04:51 UTC
Permalink
Post by Dave Curtis
A friend across town has been using KiCad for a while, and since we
are interested in building the same sorts of things we try to share
what we can in terms of tools and designs. Right now, we are hot on
the "grand, unified, footprint generator script" problem. We would
like to come up with a single front-end that can create footprints for
both pcb and KiCad so that we could share footprints more easily.
So... I've been looking at the KiCad footprint file format and their
new one can handle a lot of things that are somewhat vexing in pcb --
although I'm not too hot on the S-expression idea overall. Anyway,
KiCad seems to leave some things out that my friend and I have been
talking about -- like keep-outs.
So the point of the above paragraph is, yes, I can suggest some
extensions, and now would be a good time to capture that since I am
1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with whatever
settles out of that.
Investigate the feasibility of implementing the extensions. I simply
don't know the code well enough.
-dave
Hi Dave,

Please have a look at:

https://github.com/bert/fped

It was a parametric footprint editor just for generating KiCad footprint
libraries until I added code for generating pcb footprints.

I got stuck at the point of separating the individual footprints for pcb
(for KiCad a library of concatenated footprints is generated).

Please give it a test drive, patches are welcome :)


Kind regards,

Bert Timmerman.
Dave Curtis
2014-07-09 21:45:24 UTC
Permalink
Post by Bert Timmerman
Post by Dave Curtis
A friend across town has been using KiCad for a while, and since we
are interested in building the same sorts of things we try to share
what we can in terms of tools and designs. Right now, we are hot on
the "grand, unified, footprint generator script" problem. We would
like to come up with a single front-end that can create footprints
for both pcb and KiCad so that we could share footprints more
easily. So... I've been looking at the KiCad footprint file format
and their new one can handle a lot of things that are somewhat vexing
in pcb -- although I'm not too hot on the S-expression idea overall.
Anyway, KiCad seems to leave some things out that my friend and I
have been talking about -- like keep-outs.
So the point of the above paragraph is, yes, I can suggest some
extensions, and now would be a good time to capture that since I am
1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with what ever
settles out of that.
Investigate the feasibility of implementing the extensions. I simply
don't know the code well enough.
-dave
Hi Dave,
https://github.com/bert/fped
It was a parametric footprint editor just for generating KiCad
footprint libraries until I added code for generating pcb footprints.
I got stuck at the point of separating the individual footprints for
pcb (for KiCad a library of concatenated footprints is generated).
Not with the new format.
Post by Bert Timmerman
Please give it a test drive, patches are welcome :)
I'll take a look at it.
Post by Bert Timmerman
Kind regards,
Bert Timmerman.
Peter Clifton
2014-07-09 01:32:53 UTC
Permalink
Post by Lilith Bryant
Post by Peter Clifton
Not exactly, and it may not work when multiple different objects combine to
create a thin feature.
Can you describe such a case. I can't see how this could be?
For mask cutouts, what I meant was that it would only work for global
application to the entire board at one time. I don't think it would work
nicely with our usual incremental approach to polygon management.

For application to general polygons - there are some fundamental
problems. One, in particular, is if the outer contour is explicitly
defined such that it violates a min-width rule in a necked-down region.

If you shrink this polygon (by subtracting a line for each edge of its
contour), it will break into two pieces on either side of the necked
region. PCB's connectivity checking code would not cope with that; the
only way it would cope with the current code-base is if you deliberately
threw away one of the pieces.

Bloating a convex part of the polygon (after shrinking) will also
introduce rounded corners, not necessarily an issue - but also not
necessarily what people might want, or expect. Certainly it would be a
break from what we currently expect.
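To make the erode-then-dilate idea concrete, here is a toy raster sketch (the helper names are mine, and a real tool would offset polygon contours rather than rasterise): "opening" a copper shape removes the under-width neck and leaves two disconnected pieces, which is exactly the "breaks into two pieces" situation described above.

```python
def _filled(grid, r, c):
    """Treat anything outside the grid as empty copper."""
    return 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c]

NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def erode(grid):
    """A cell survives only if it and all four neighbours are filled."""
    return [[bool(grid[r][c]) and all(_filled(grid, r + dr, c + dc)
                                      for dr, dc in NEIGHBOURS)
             for c in range(len(grid[0]))] for r in range(len(grid))]

def dilate(grid):
    """A cell fills if it or any of its four neighbours is filled."""
    return [[bool(grid[r][c]) or any(_filled(grid, r + dr, c + dc)
                                     for dr, dc in NEIGHBOURS)
             for c in range(len(grid[0]))] for r in range(len(grid))]

# Two solid pads joined by a one-cell-wide neck (row 1, cols 4-6).
copper = [
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1],
]

opened = dilate(erode(copper))
# The neck centre is gone (opened[1][5] is False) but both pads survive
# (opened[1][1] and opened[1][9] are True) - as two separate pieces.
```

On real vector polygons the dilate step is also where the convex corners pick up a radius, which the raster version hides.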


The method could be used to remove sharp corners from spikes on
polygons, but something "feels" wrong with insisting on radiusing say, a
90 degree corner of a power plane, just because it happened to go
through this code-path. This is one of the reasons I never did anything
with that idea when we looked at it before.


Long ago, I had a git branch which addressed this "one piece kept"
issue, allowing one "pour", once dissected by tracking, to contribute to
the connectivity of any number of different nets. The branch had code to
allow keeping every piece of a poured polygon and, optionally,
introduced island-removal logic to keep only those pieces which were
electrically connected to something. I'll perhaps revive the branch one
day.

For reference, I used the word "pour" in the branch to indicate the
"correct", non-piece-throwing-away behaviour, in contrast to PCB's
current "polygons".
--
Peter Clifton <peter.clifton-j0HF+osULJQMjHSeoOxd2MuBeof9RJB+Wmv/***@public.gmane.org>

Clifton Electronics
Lilith Bryant
2014-07-09 01:59:39 UTC
Permalink
Post by Peter Clifton
Bloating a convex part of the polygon (after shrinking) will also
introduce rounded corners, not necessarily an issue - but also not
necessarily what people might want, or expect. Certainly it would be a
break from what we currently expect.
Ah, yes sorry. I didn't consider the corner rounding effect.

But, it could be argued that ANY sharp corner is a DRC violation.
i.e. if you measure across the corner diagonally, you've got a measurement less
than the min copper width.

i.e. if a 90 degree corner is OK, and a 5 degree "spike" corner certainly isn't,
then where's the line in the sand? <90 is bad, >=90 is OK? Or maybe 85 degrees,
to cope with not-quite-square polygons?

I don't think it's the end of the road for this idea; explicit corners could be
re-squared off, but I accept that this is getting complex fast.

I'm not bothered about corner rounding personally, but I can see why it would be
a problem for some uses. So I will probably be writing a Python tool for myself
(and anyone else) to bake'n'round polygons, now that I've figured out how it
could be done. There are clearly just too many issues to have it in the core.
Peter Clifton
2014-07-09 12:50:00 UTC
Permalink
Apologies for my phone related top posting.

I've been poking at pcb's internals for a long time now, and suspect I could (given time) fill in the geometry / rendering / connectivity parts of any file format extension. I just don't have a lot of time to drive the whole thing from design, documentation, discussion, testing, etc.

Implementing editing support is often the hardest part of enabling a new feature. (Think gschem's paths - I never did finish support for creating those within gschem, only moving existing control points.)

I think any layer objects embedded in footprints (might as well include silk and copper in the same way going forward) ought to reference a predefined symbolic layer name or ID.

"TOP-SILK" "TOP-MASK" "TOP-COPPER" "INNER-COPPER" "INNER-ANTI-COPPER"* "BOTTOM-COPPER" etc...

*(inner anticopper might need some thought, possibly not one for today!).


These would no doubt be interpreted relative to the placement side. Names and rules for these would need defining. Ideally, think about how this applies to complex constructions like rigid-flex and buried components. (If it is easy to design so that we support / avoid breaking such flows, we should.)


I'd be tempted to make pads layer objects going forward (and let them reside on any copper layer), but perhaps for now, keeping them "special" and outside the normal layer data structure may be cleaner due to the fact that they place objects on multiple "layers" (copper, mask, paste, etc.).

If there is a need to define geometric areas (keepouts etc.), this should ideally be done with a primitive polycurve definition, not a collection of other objects. IDF supports line and circular arc segments for this, and I've been working on support for something similar within PCB's polygons. You might need to consider how multiple outlines are handled.

I've been tempted to suggest a polycurve representation for pads too. Might want to indirect through a pad and/or padstack definition to avoid huge amounts of data repetition, and to allow the exporters to more easily group features into unique apertures etc.

If you have a link to hand, send me a pointer to the KiCAD footprint format reference please.

---
Peter Clifton <***@clifton-electronics.co.uk >

(Sent from my phone)

-------- Original message --------
From: Dave Curtis <***@sonic.net>
Date:09/07/2014 07:09 (GMT+00:00)
To: geda-***@delorie.com
Subject: Re: [geda-user] pour clearing around pads

On 07/08/2014 11:17 AM, Peter Clifton wrote:
Please don't use zero width pads...
OK :)

The file format makes no guarantee how they behave, and it seems like the special-case-free situation should view them as an under-width DRC error.
Yes, it's a total belly-flop onto "maybe it happens to work."

Adding support for defining additional mask within a footprint should not be insurmountable, just needs in the first instance, someone to define the extension to the file format.
Extending the file format is the easy part. I can come up with lots of ideas for syntax, and I could have a patch for the flex .l-file in minutes, and recognize the constructs in the bison code quite quickly as well. It's getting past that point where we smack into a wall -- it's not clear to me that the internal data structures are ready to accept copper-clearance and mask-clearance features that are not associated with a pad or a pin.

A friend across town has been using KiCad for a while, and since we are interested in building the same sorts of things we try to share what we can in terms of tools and designs.  Right now, we are hot on the "grand, unified, footprint generator script" problem.   We would like to come up with a single front-end that can create footprints for both pcb and KiCad so that we could share footprints more easily.  So... I've been looking at the KiCad footprint file format and their new one can handle a lot of things that are somewhat vexing in pcb -- although I'm not too hot on the S-expression idea overall.  Anyway, KiCad seems to leave some things out that my friend and I have been talking about -- like keep-outs. 

So the point of the above paragraph is, yes, I can suggest some extensions, and now would be a good time to capture that since I am trying to wrap my head around the issues right now.  What I can do:

1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with what ever settles out of that.

What I can not do:

Investigate the feasibility of implementing the extensions.  I simply don't know the code well enough.

-dave


Peter


---
Peter Clifton <***@clifton-electronics.co.uk >

(Sent from my phone)

-------- Original message --------
From: DJ Delorie
Date:07/07/2014 19:00 (GMT+00:00)
To: geda-***@delorie.com
Subject: Re: [geda-user] pour clearing around pads
Post by Dave Curtis
I'm not sure where to look for how a zero-thickness pad might cause
phantom shorts or how it interacts with route blocking.  Clues welcome.
The code considers a pad to be a line segment between two points, and
may do a "do these segments intersect" test independent of the "check
the width" test.  djopt, I think, does this - intersection of segments
is a different test than the "happen to touch due to width" test.
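The two tests DJ describes really are independent, which a generic sketch can show (this is illustrative Python using a standard orientation test, not PCB's or djopt's actual code): the centre lines of two zero-width "pads" can cross, so a pure segment-intersection test reports contact that a width-based "do they touch" test never would — one plausible source of phantom shorts.

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """Proper segment intersection (collinear touching ignored for brevity):
    the endpoints of each segment must lie on opposite sides of the other."""
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

# Two zero-width "pads" modelled as their centre lines: the segment test
# says they intersect, even though copper of zero width could never
# physically touch, so a width-based overlap test would say "no contact".
crosses = segments_cross((0, 0), (2, 2), (0, 2), (2, 0))  # True
```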
Dave Curtis
2014-07-09 16:10:41 UTC
Permalink
Post by Peter Clifton
Apologies for my phone related top posting.
I've been poking at pcb's internals for a long time now, and suspect I
could (given time) fill in the geometry / rendering / connectivity
parts of any file format extension.
Ohh... a vict^h^h^h^h volunteer...
Post by Peter Clifton
I just don't have a lot of time to drive the whole thing from design,
documentation, discussion, testing etc..
Implementing editing support is often the hardest part of enabling a
new feature. (Think gschem's paths - I never did finish support for
creating those within gschem, only moving existing control points.)
I think any layer objects embeded in footprints (might as well include
silk and copper in the same way going forward), ought to reference
predefined symbolic layer name or ID.
"TOP-SILK" "TOP-MASK" "TOP-COPPER" "INNER-COPPER" "INNER-ANTI-COPPER"*
"BOTTOM-COPPER" etc...
*(inner anticopper might need some thought, possibly not one for today!).
Seems like you are thinking along the same lines I am. Geometry *
layer-flags.
Post by Peter Clifton
These would no doubt be interpreted relative to the placement side.
Names, and rules for these would need defining. Ideally, think how
this applies to complex constructions like flex rigid and buried
components. (If it is easy to design so we support / avoid breaking
such flows, we should).
I'd be tempted to make pads layer objects going forward, (and let them
reside on any copper layer), but perhaps for now, keeping them
"special" and outside the normal layer data structure may be cleaner
due to the fact they place objects on multiple "layers". (Copper,
mask, paste etc...)
If there is need to define geometric areas (keepouts etc.), this
should ideally be done with a primitive polycurve definition, not a
collection of other objects. IDF supports line and circular arc
segments for this, and I've been working on support for similar within
PCBs polygons. You might need to consider how multiple outlines are
handled.
Some kind of polycurve geometry would be very flexible. I have no idea
how difficult that would be to handle in the internal data structures,
what impact that has on performance, how much that complicates
rotations, etc. etc.
Post by Peter Clifton
I've been tempted to suggest a polycurve representation for pads too.
Might want to indirect through a pad and/or padstack definition to
avoid huge amounts of data repetition, and to allow the exporters to
more easily group features into unique apertures etc.
I'll work up a few ideas and write them up to kick things off. First
off, I want to collect a laundry list of "cranky footprint problems" so
that I have something to validate the spec against. I have a couple of
my own poster-child footprints for that, but my PCB designs never get
extremely complicated and I don't target automated assembly, so there
are a lot of issues I'm not aware of.
Post by Peter Clifton
If you have a link to hand, send me a pointer to the KiCAD footprint
format reference please.
This page:
http://www.kicad-pcb.org/display/KICAD/File+Formats
has a link to a pdf document. Note that KiCad is moving to a new
format. The old format has a single monolithic text file that contains
all footprints -- I haven't looked at that format at all, but I assume
it has been found to be insufficiently flexible. The new format is
based on S-expressions. Seems flexible, but I think they leave things
out (like keep-outs) or perhaps I'm not seeing how to use what is
there. Could be a case of creeping elegance, or maybe they have the
right idea. I haven't studied it enough to say.

-dave
Post by Peter Clifton
---
(Sent from my phone)
-------- Original message --------
From: Dave Curtis
Date:09/07/2014 07:09 (GMT+00:00)
Subject: Re: [geda-user] pour clearing around pads
Post by Peter Clifton
Please don't use zero width pads...
OK :)
Post by Peter Clifton
The file format makes no guarantee how they behave, and it seems like
the special case free situation should view them as an under width
DRC error.
Yes, its a total belly-flop onto "maybe it happens to work."
Post by Peter Clifton
Adding support for defining additional mask within a footprint should
not be insurmountable, just needs in the first instance, someone to
define the extension to the file format.
Extending the file format is the easy part. I can come up with lots
of ideas for syntax. And I could have a patch for the flex .l-file in
minutes and recognize the the constructs in the bison code quite
quickly as well. It's getting past that point is where we smack into
a wall -- it's not clear to me that the internal data structures are
ready to accept copper-clearance and mask-clearance features that are
not associated with a pad or a pin.
A friend across town has been using KiCad for a while, and since we
are interested in building the same sorts of things we try to share
what we can in terms of tools and designs. Right now, we are hot on
the "grand, unified, footprint generator script" problem. We would
like to come up with a single front-end that can create footprints for
both pcb and KiCad so that we could share footprints more easily.
So... I've been looking at the KiCad footprint file format and their
new one can handle a lot of things that are somewhat vexing in pcb --
although I'm not too hot on the S-expression idea overall. Anyway,
KiCad seems to leave some things out that my friend and I have been
talking about -- like keep-outs.
So the point of the above paragraph is, yes, I can suggest some
extensions, and now would be a good time to capture that since I am
1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with what ever settles out of that.
Investigate the feasibility of implementing the extensions. I simply
don't know the code well enough.
-dave
Post by Peter Clifton
Peter
---
(Sent from my phone)
-------- Original message --------
From: DJ Delorie
Date:07/07/2014 19:00 (GMT+00:00)
Subject: Re: [geda-user] pour clearing around pads
Post by Dave Curtis
I'm not sure where to look for how a zero-thickness pad might cause
phantom shorts or how it interacts with route blocking. Clues welcome.
The code considers a pad to be a line segment between two points, and
may do a "do these segments intersect" test independent of the "check
the width" test. djopt, I think, does this - intersection of segments
is a different test than the "happen to touch due to width" test.
DJ Delorie
2014-07-09 17:50:41 UTC
Permalink
I think any layer objects embeded in footprints (might as well include silk and copper in the same way  going forward), ought to reference predefined symbolic layer name or ID.
"TOP-SILK" "TOP-MASK" "TOP-COPPER" "INNER-COPPER" "INNER-ANTI-COPPER"* "BOTTOM-COPPER" etc...
*(inner anticopper might need some thought, possibly not one for today!).
Yeah, IMHO symbolic layers is a must.

I also think we need a way of "stacking" or "nesting" drawing layers
within a physical layer to do fill/cut/draw operations. For example:

* "Fill" - positive, first rendered, used for power plane polygons
* "Cut" - used for keep-outs, and cutting planes into sub-planes with traces
* "Trace" - used to draw traces over polygons (clear polygons but ignore cuts)

Each layer needs a positive/negative flag, so you could (for example)
draw negative text over a filled rectangle.

But given that footprints might have their own fill/cut/trace layers,
which may be drawn on top of the board-layer cuts, we need to be
flexible in making these stacks...

* board-level fill
* board-level cut
* footprint-level fill
* footprint-level cut
* traces

but if you want to support "sub-layouts" it gets even more complex.

Perhaps a hierarchical design?

* board-level fill
* board-level cut
* sub-layouts and footprints ->
* . . .
* . . .
* . . .
* board-level traces

And all that is just *per layer*
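As a toy illustration of the fill/cut/trace ordering (one raster row of one copper layer; the function name is mine, and this is only a sketch of the proposal, not existing pcb behaviour), rendering in order means a trace drawn last survives a keep-out cut that cleared the fill:

```python
def composite(fill, cut, trace):
    """Render one copper layer: fill is positive, cut subtracts from it,
    and trace is drawn last, so it ignores cuts made earlier."""
    return [[bool((f and not c) or t) for f, c, t in zip(fr, cr, tr)]
            for fr, cr, tr in zip(fill, cut, trace)]

# A filled plane, a keep-out cut through the middle of it, and one
# trace routed across part of the cut region.
fill  = [[1, 1, 1, 1, 1]]
cut   = [[0, 1, 1, 1, 0]]
trace = [[0, 0, 1, 0, 0]]

layer = composite(fill, cut, trace)
# -> [[True, False, True, False, True]]: the cut clears the fill,
#    except where the trace crosses it.
```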
Dave Curtis
2014-07-09 22:04:42 UTC
Permalink
Post by DJ Delorie
I think any layer objects embeded in footprints (might as well include silk and copper in the same way going forward), ought to reference predefined symbolic layer name or ID.
"TOP-SILK" "TOP-MASK" "TOP-COPPER" "INNER-COPPER" "INNER-ANTI-COPPER"* "BOTTOM-COPPER" etc...
*(inner anticopper might need some thought, possibly not one for today!).
Yeah, IMHO symbolic layers is a must.
I also think we need a way of "stacking" or "nesting" drawing layers
* "Fill" - positive, first rendered, used for power plane polygons
* "Cut" - used for keep-outs, and cutting planes into sub-planes with traces
* "Trace" - used to draw traces over polygons (clear polygons but ignore cuts)
Each layer needs a positive/negative flag, so you could (for example)
draw negative text over a filled rectangle.
But given that footprints might have their own fill/cut/trace layers,
which may be drawn on top of the board-layer cuts, we need to be
flexible in making these stacks...
* board-level fill
* board-level cut
* footprint-level fill
* footprint-level cut
* traces
I agree with all that. The need for a footprint-level version of the
layers distinct from the layout level isn't entirely clear to me, but
I assume that makes internal operations easier to sort out.
Post by DJ Delorie
but if you want to support "sub-layouts" it gets even more complex.
Perhaps a heirarchical design?
* board-level fill
* board-level cut
* sub-layouts and footprints ->
* . . .
* . . .
* . . .
* board-level traces
And all that is just *per layer*
And what are the chances of this happening in pcb? Is that a doable change?

So here is another one. When I saw my KiCad-using friend (at the "Wednesday
Robot lunch", where we talk about robots and eat Thai food...), the topic
of buried parts in multi-layer boards came up. So that got me thinking
about how to represent voids. The void is a cut into the substrate of
the lamination(s) above the component. It seems to me this is another
footprint layer, with a Z-thickness, that causes voids in adjacent
layers depending on the particular stacking order and layer thickness.
I don't plan to build any of these any time soon... but it seems like
the concept should be considered along with all the rest.
DJ Delorie
2014-07-09 22:23:00 UTC
Permalink
Post by Dave Curtis
Post by DJ Delorie
* board-level fill
* board-level cut
* footprint-level fill
* footprint-level cut
* traces
I agree with all that. The need for a footprint-level version of the
layers distinct from the lay-out level isn't extremely clear for me, but
I assume that makes internal operations easier to sort out.
It doesn't need to be footprint-vs-layout; it's just that there need
to be rules about how to apply layered patterns when recursively
defining layouts in terms of sub-layouts and/or footprints. I.e.
the above list is, internally, simply:

* positive, polygons, lines, etc
* negative, polygons, lines, etc
* sub-part (from footprint, sub-layout, whatever)
* positive, etc
* negative, etc
* positive, etc
* positive, etc
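A minimal sketch of that recursive structure, assuming a set-of-cells stand-in for real geometry (all names here are mine, not pcb's): each child is a positive or negative primitive, or a nested sub-part that composites independently before being unioned into its parent, so an earlier board-level cut cannot erase a sub-part's fill.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    positive: bool
    cells: frozenset      # stand-in for real polygon/line geometry

@dataclass
class Stack:
    children: list = field(default_factory=list)  # Items or nested Stacks

def render(node):
    """Composite children in order. A nested Stack is rendered on its
    own first, then unioned in as a positive whole."""
    out = set()
    for child in node.children:
        if isinstance(child, Stack):
            out |= render(child)
        elif child.positive:
            out |= child.cells
        else:
            out -= child.cells
    return out

# Board fill, a board cut, a footprint whose own fill re-covers part of
# the cut, then a board-level trace.
board = Stack([
    Item(True,  frozenset({1, 2, 3, 4, 5})),   # board-level fill
    Item(False, frozenset({3, 4})),            # board-level cut
    Stack([Item(True,  frozenset({4})),        # footprint fill
           Item(False, frozenset({9}))]),      # footprint cut (no-op here)
    Item(True,  frozenset({7})),               # board-level trace
])
# render(board) -> {1, 2, 4, 5, 7}: cell 4 survives the board cut
# because the footprint's fill is applied after it.
```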
Post by Dave Curtis
And what are the chances of this happening in pcb? Is that a doable change?
Someone would have to have the drive and time to do it. I think it's
a fundamental change to the pcb internals, which means it touches most
of the code.
Post by Dave Curtis
So here is another one. I saw my KiCad-using friend (at the "Wednesday
In some EDA packages, the footprint includes some "skyscraper"
information that gives a rough idea of space needed in 3-D terms.
Sometimes that info can be used to do a 3-D model of the layout,
sometimes that info is used as a 3-D "keep out". We could define a
type of that data that means "remove board here" but I'm not sure how
well that would relate to the rest of PCB.

I mean, lots of places in pcb need to know the "shape of the board".
Now we're adding "shape of the board from layer X to layer Y" etc.
How much of this shape will be dynamically computed by scanning the
layout? How much will be pre-computed and cached? How will this
affect performance or code readability? I don't know the answers to
those questions.
Dave Curtis
2014-07-09 23:18:18 UTC
Permalink
Post by DJ Delorie
Post by Dave Curtis
And what are the chances of this happening in pcb? Is that a doable change?
Someone would have to have the drive and time to do it. I think it's
a fundamental change to the pcb internals, which means it touches most
of the code.
Seems like this is intertwined with a road-map discussion. It comes
back to incremental change versus "boil the ocean." I don't have a lot
of visibility into what ideas cross over from incremental to
infrastructure. OTOH, maybe it's good that I don't have that coloring my
thought process.

Given a "future footprint" specification perhaps there is a way to pull
incremental changes out of it that are easy to implement in the current
engine, and then as the infrastructure comes on line more pieces of it
can be enabled. Or, maybe there is "future footprint" and "transition
footprint" format.
Dave Curtis
2014-07-10 03:54:36 UTC
Permalink
Post by DJ Delorie
I think any layer objects embeded in footprints (might as well include silk and copper in the same way going forward), ought to reference predefined symbolic layer name or ID.
"TOP-SILK" "TOP-MASK" "TOP-COPPER" "INNER-COPPER" "INNER-ANTI-COPPER"* "BOTTOM-COPPER" etc...
*(inner anticopper might need some thought, possibly not one for today!).
Yeah, IMHO symbolic layers is a must.
I also think we need a way of "stacking" or "nesting" drawing layers
* "Fill" - positive, first rendered, used for power plane polygons
* "Cut" - used for keep-outs, and cutting planes into sub-planes with traces
* "Trace" - used to draw traces over polygons (clear polygons but ignore cuts)
Each layer needs a positive/negative flag, so you could (for example)
draw negative text over a filled rectangle.
But given that footprints might have their own fill/cut/trace layers,
which may be drawn on top of the board-layer cuts, we need to be
flexible in making these stacks...
* board-level fill
* board-level cut
* footprint-level fill
* footprint-level cut
* traces
but if you want to support "sub-layouts" it gets even more complex.
Perhaps a hierarchical design?
* board-level fill
* board-level cut
* sub-layouts and footprints ->
* . . .
* . . .
* . . .
* board-level traces
And all that is just *per layer*
It strikes me that this is an excellent layer model for talking about
footprint models, even though pcb doesn't support arbitrary symbolic
layers. If we define a model consisting of symbolic logic layers, each
containing sublayers, and define rendering rules, then we can map what
pcb *currently* does onto that same model.

So while ultimately it would be great if users could create arbitrary
layer stacks with arbitrary names, and define their rendering order,
right now today pcb *has* a set of logical layers, with defined names,
and a defined rendering order. That means one could go ahead and define
a footprint model around some future flexible layer model. The
footprint file format can target the flexible layer model; it's just that
if you code a footprint today there is exactly one layer model available
to you, with pre-defined layer names, so your footprint needs to live
within those constraints. As pcb moves forward, more of the semantics
of the footprint model become available as layer constraints are relaxed.
DJ Delorie
2014-07-10 04:49:57 UTC
Permalink
Post by Dave Curtis
It strikes me that this is an excellent layer model for talking about
footprint models, even though pcb doesn't support arbitrary symbolic
layers. If we define a model consisting of symbolic logic layers, each
containing sublayers, and define rendering rules, then we can map what
pcb *currently* does onto that same model.
So while ultimately it would be great if users could create arbitrary
layer stacks with arbitrary names, and define their rendering order,
right now today pcb *has* a set of logical layers, with defined names,
and a defined rendering order. That means one could go ahead and define
a footprint model around some future flexible layer model. The
footprint file format can target the flexible layer model, its just that
if you code a footprint today there is exactly one layer model available
to you, with pre-defined layer names, so your footprint needs to live
within those constraints. As pcb moves forward, more of the semantics
of the footprint model become available as layer constraints are relaxed.
I suppose we could start with a new footprint *import* format, and
support a subset of it, with the expectation that we'd support more of
it later. I'd want to consider it "experimental" in case we end up
*not* supporting it later, or supporting something different.

Bonus if we could convert that format to Kicad too :-)
Dave Curtis
2014-07-10 05:14:10 UTC
Permalink
Post by DJ Delorie
I suppose we could start with a new footprint *import* format, and
support a subset of it, with the expectation that we'd support more of
it later. I'd want to consider it "experimental" in case we end up
*not* supporting it later, or supporting something different.
Yes, exactly. I suspect that there are things that could be done to
enable more functionality in pcb footprints that don't require massive
infrastructure updates in pcb. With the front-end file reader in place,
then it makes it a lot easier to move forward with pcb, both
incrementally for the small things, and with a clear road map for the
big things.
John Griessen
2014-07-16 02:05:08 UTC
Permalink
Post by Peter Clifton
I'd be tempted to make pads layer objects going forward, (and let them reside on any copper layer), but perhaps for now, keeping
them "special" and outside the normal layer data structure may be cleaner due to the fact they place objects on multiple
"layers". (Copper, mask, paste etc...)
There's not much limit in computing languages or data structures today,
so the best return on time spent will come from modeling
physical reality well, rather than using "special" layers,
special cases, etc., that must logically combine to
match reality. In other words, have pads be defined per
physical layer and affect only things that are physically near them in 3D,
and nothing else. Making things self-consistent, like an E field,
is ultimately simpler than a giant text sort that kinda gives the right answer.
And you can use assertions on the local volume of material to keep errors in check. Otherwise,
you can have "action at a distance" spaghetti-code effects.

Of course a physical material and 3D space model means a lot of
PCB and gschem redesign, but... why waste any life hours of any
developers on dead ends?

Incremental change is the only way we've seen work in FOSS, so any move towards such goals
that can be incremental is what to "wrack your brain" for...

On 07/09/2014 07:58 AM, Peter Clifton wrote:> I've personally never had a board vendor want changes in gerber data to accommodate
manufacturing processes. This is generally
Post by Peter Clifton
something they can do themselves using their CAM software if they need.
But that drops the responsibility for repeatable, successful fabrication onto the fabber, when
one should keep it oneself, or risk a lot of waste and loss.
Evan Foss
2014-07-23 06:18:59 UTC
Permalink
Sorry about bumping a somewhat aged thread. I understand the desire
to add 3D functionality, and the idea of a utility to import outside
formats makes a lot of sense.

John, the thing that worries me is the alteration of gschem. Other than
adding another label for marking a 3D model along with the footprint,
what alteration is really needed from gschem?
Post by John Griessen
Post by Peter Clifton
I'd be tempted to make pads layer objects going forward, (and let them
reside on any copper layer), but perhaps for now, keeping
them "special" and outside the normal layer data structure may be cleaner
due to the fact they place objects on multiple
"layers". (Copper, mask, paste etc...)
There's not much limit in computing language or data structures today,
so it will give the best return on time spent to model
physical reality well, rather than use "special" layers,
special cases, etc, that must logically combine to
match reality. In other words, have pads be defined per
physical layer and affect things that are 3D physically near them
and nothing else. Making things self consistent like an e field
is ultimately simpler than a giant text sort kinda giving the right answer.
And you can use assertions on the local volume of material to keep errors in
check. Otherwise,
you can have "action at a distance" spaghetti code effects.
Of course a physical material and 3D space model means a lot of
PCB and gschem redesign, but... why waste any life hours of any
developers on dead ends?
Incremental change is the only way we've seen work in FOSS, so any move towards such goals
that can be incremental is what to "wrack your brain" for...
On 07/09/2014 07:58 AM, Peter Clifton wrote:> I've personally never had a
board vendor want changes in gerber data to accommodate manufacturing
processes. This is generally
Post by Peter Clifton
something they can do themselves using their CAM software if they need.
But that shifts the responsibility for repeatable, successful fabrication onto the fabber,
when one should keep it oneself, or risk a lot of waste and loss.
--
Home
http://evanfoss.googlepages.com/
Work
http://forge.abcd.harvard.edu/gf/project/epl_engineering/wiki/
John Griessen
2014-07-23 14:34:02 UTC
Permalink
Post by Evan Foss
John thing that worries me is the alteration of gschem. Other than
adding another label for marking a 3D model along with the footprint
what alteration is really needed from gschem?
Some way for it to handle symbols just as it handles subschematics,
the way Verilog or Verilog-AMS does. Then you have hierarchy
with the ability to reuse modules even if they have the same name.
It's a huge change, and not likely to happen at all unless a need is perceived.

The kinds of reasons for this would be using gschem in chip design.
Next would be large-scale planar circuits of printed electronics,
where you are using Verilog and Verilog-AMS to model the low-level function
of layout cells that can be repeated hundreds of times as part of a
circuit. For when we can lay out printed resistors, caps, diodes, transistors,
inductors -- not just wire -- and have them fabbed cheaply.

Could take a while.
Evan Foss
2014-07-23 18:46:45 UTC
Permalink
Having never done chip design I could be wrong in the following line
of thought, but I want to understand this. That still sounds more like
a netlisting issue to me. I should think the subcircuit really just
needs some special tag on its page to indicate that the Rx, Cx and
other reference designators will be altered after netlisting &
tessellation on the layout.
Post by John Griessen
Post by Evan Foss
John thing that worries me is the alteration of gschem. Other than
adding another label for marking a 3D model along with the footprint
what alteration is really needed from gschem?
Some way for it to handle symbols just as it handles subschematics
the way verilog or verilog-ams does. Then you have hierarchy
with the ability to reuse modules even if they have the same name.
It's a huge change. Not likely to happen at all unless a need is perceived.
The kinds of reasons for this would be using gschem in chip design.
Next would be for large scale planar circuits of printed electronics
where you are using verilog and verilog-ams to model the lowlevel function
of layout cells that can be repeated hundreds of times as part of a
circuit. For when we can layout printed resistors, caps, diodes, transistors,
inductors -- not just wire -- and fabbed for cheap.
Could take a while.
John Griessen
2014-07-23 20:35:11 UTC
Permalink
Post by Evan Foss
I should think the subcircuit really just
needs some special tag on it's page that will indicate the Rx, Cx and
other reference designators will be altered after netlisting &
tessellation on the layout.
In chips or big circuits with repeated elements the burden would be huge.
Flat netlisting would grind to a halt for circuits with 40 elements repeated 64 times
in one area, and 50 other cases of the same in the whole circuit. That example
is 128K circuit elements used to make shift registers, and not even estimating RAM
and ROM to use.
But printed electronics that goes on the door of a washing machine could get that way easily.
There would be plenty of ROM to encode your programs, and some RAM to run out of,
all made in a slow large CMOS type of organic or nano inorganic semiconductor set of materials
printed on the surface of the door panel so it can dissipate heat and have low low leakage
and low low power consumption, and slow boring performance to go with that. But fine for
motor control and a UI, and there would be a few chips added to transition to the faster networking
circuit world around that slow washing machine bot. The circuit area would look like
all wires and flat patches of capacitance and transistors with no 3D components anywhere except a few
at the edge with an ethernet cable or wireless going out, or at the edge with the motor control
where there are a few special power transistors attached.


Think about making a 64 bit ALU...it would get super tedious to have unique ref des's for
all those transistors. You really just need to know they come from "such and such"
module that has been simulated plenty and can be used with up to so much length of wires away.
Each placement of a module does not need its own unique identifier for module's R1, R2, C1.
They can all be called R1, R2, C1, and exist in different instances of the identical
module, where the instances are kept track of in the netlist and by gschem.


There would be no components needing a ref des anyway in an array repetitive circuit case, since
the circuit elements would be completed from primitives like in chip
mask making, not by an assembly step needing a ref des.
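The module/instance bookkeeping John describes -- local names like R1, R2, C1 reused across identical module instances, with uniqueness coming from the instance path rather than per-part refdes's -- can be sketched as follows. This is a hypothetical illustration, not gschem or pcb code; all class and variable names are invented.

```python
# Hypothetical sketch (not gschem/pcb code): module-local refdes names
# are reused across instances; the instance path provides uniqueness.

class Module:
    def __init__(self, name, parts):
        self.name = name          # e.g. "shiftcell"
        self.parts = parts        # local refdes names, e.g. ["R1", "R2", "C1"]

class Instance:
    def __init__(self, path, module):
        self.path = path          # e.g. "sr0/cell17"
        self.module = module

def flatten(instances):
    """Expand the hierarchy to unique names only when a flat view is needed."""
    return [f"{inst.path}/{ref}" for inst in instances for ref in inst.module.parts]

# 64 placements of one simulated-and-trusted cell, each keeping R1/R2/C1:
cell = Module("shiftcell", ["R1", "R2", "C1"])
insts = [Instance(f"sr0/cell{i}", cell) for i in range(64)]
flat = flatten(insts)
```

The point is that the schematic database only ever stores one copy of the module; the 192 flat names exist only in a derived view, so netlisting does not grind through thousands of hand-assigned designators.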
Evan Foss
2014-07-23 21:32:42 UTC
Permalink
OK. I figured array parts would be more like R01.001 or something, but
if you do away with refdes entirely for parts in the array, life gets a
lot easier. That said, it still feels like an issue for post-gschem
stuff.
Post by John Griessen
Post by Evan Foss
I should think the subcircuit really just
needs some special tag on it's page that will indicate the Rx, Cx and
other reference designators will be altered after netlisting &
tessellation on the layout.
In chips or big circuits with repeated elements the burden would be huge.
Flat netlisting would grind to a halt for circuits with 40 elements repeated 64 times
in one area, and 50 other cases of the same in the whole circuit. That example
is 128K circuit elements used to make shift registers, and not even estimating RAM
and ROM to use.
But printed electronics that goes on the door of a washing machine could get
that way easily.
There would be plenty of ROM to encode your programs, and some RAM to run out of,
all made in a slow large CMOS type of organic or nano inorganic
semiconductor set of materials
printed on the surface of the door panel so it can dissipate heat and have low low leakage
and low low power consumption, and slow boring performance to go with that.
But fine for
motor control and a UI, and there would be a few chips added to transition
to the faster networking
circuit world around that slow washing machine bot. The circuit area would look like
all wires and flat patches of capacitance and transistors with no 3D
components anywhere except a few
at the edge with an ethernet cable or wireless going out, or at the edge
with the motor control
where there are a few special power transistors attached..
Think about making a 64 bit ALU...it would get super tedious to have unique ref des's for
all those transistors. You really just need to know they come from "such and such"
module that has been simulated plenty and can be used with up to so much
length of wires away.
Each placement of a module does not need its own unique identifier for module's R1, R2, C1.
They can all be called R1, R2, C1, and exist in different instances of the identical
module, where the instances are kept track of in the netlist and by gschem.
There would be no components needing a ref des anyway in an array repetitive
circuit case, since
the circuit elements would be completed from primitives like in chip
mask making, not by an assembly step needing a ref des.
John Griessen
2014-07-24 22:09:07 UTC
Permalink
Post by Evan Foss
I figured array parts would be more like R01.001 or something but
if you do away with refdes entirely for parts in the array life gets a
lot easier. That said it still feels like an issue for post gschem
stuff.
I'm not sure what you mean. Do you mean netlist post-processing?
I can't see that being effective... gschem is a visual tool, for the
parts that benefit from visual representation rather than just code chunks.
That usually means a wired module around a bunch of code chunks.


On 07/23/2014 04:50 PM, Dave Curtis wrote:
Post by Dave Curtis
It appeared on plotted (micro-fiche vector art) schematic prints as a single block,
with a "36" displayed by breaking the bottom line:
+------+
|      |
|      +--OUT[0-31;P0-P3]
|      |
+--36--+
sorta like that, only prettier.
Yes, that's still the format for the chip design tools as of 13 years ago, in 2001, when I last looked.
You *WANT* repeated elements that are identical -- so you can simulate enough of it to be sure.

On 07/23/2014 04:50 PM, Dave Curtis wrote:
Post by Dave Curtis
So you could have one schematic, and experiment with different versions of the
attached attributes very easily (which was
important for trying various place/route strategies in that day and age.)
Yes, that's still the aim in high-speed logic where race conditions must be avoided. Have one schematic and
swap out pieces of layout to go with it, re-simulate to see if parasitics change the deal, rinse and repeat, etc...

All of this could become something to do again as cheap printable electronic materials come of age.
There are some automation programs that can do sorta-kinda layout, but it still needs a junior engineer
on energy drinks to drive the simulate/respin/auto-route wash cycle of the high-dollar tools.
And there are some integrations that will never be done in silicon because they'd never sell enough.
That washing machine door circuit to do almost everything the machine needs is an example. There won't
ever be a silicon planar fabbed chip for that since it is too small a market niche, and has power handling
and info handling in one.

Dave Curtis
2014-07-23 21:50:14 UTC
Permalink
Post by John Griessen
Post by Evan Foss
I should think the subcircuit really just
needs some special tag on it's page that will indicate the Rx, Cx and
other reference designators will be altered after netlisting &
tessellation on the layout.
In chips or big circuits with repeated elements the burden would be huge.
Flat netlisting would grind to a halt for circuits with 40 elements repeated 64 times
in one area, and 50 other cases of the same in the whole circuit. That example
is 128K circuit elements used to make shift registers, and not even estimating RAM
and ROM to use.
But printed electronics that goes on the door of a washing machine could
get that way easily.
There would be plenty of ROM to encode your programs, and some RAM to run out of,
all made in a slow large CMOS type of organic or nano inorganic
semiconductor set of materials
printed on the surface of the door panel so it can dissipate heat and have low low leakage
and low low power consumption, and slow boring performance to go with that. But fine for
motor control and a UI, and there would be a few chips added to
transition to the faster networking
circuit world around that slow washing machine bot. The circuit area would look like
all wires and flat patches of capacitance and transistors with no 3D
components anywhere except a few
at the edge with an ethernet cable or wireless going out, or at the edge
with the motor control
where there are a few special power transistors attached..
Think about making a 64 bit ALU...it would get super tedious to have unique ref des's for
all those transistors. You really just need to know they come from "such and such"
module that has been simulated plenty and can be used with up to so much
length of wires away.
Each placement of a module does not need its own unique identifier for module's R1, R2, C1.
They can all be called R1, R2, C1, and exist in different instances of the identical
module, where the instances are kept track of in the netlist and by gschem.
There would be no components needing a ref des anyway in an array
repetitive circuit case, since
the circuit elements would be completed from primitives like in chip
mask making, not by an assembly step needing a ref des.
The last time I did any gate-array design, we were still building
mainframe CPU's with ECL. But... back then Amdahl had one of the most
high-productivity schematic editors I have ever used. And it ran on a
52-line by 80-column text terminal. It was productive for two reasons:

1) Component stacking with powerful bus rippers. I could specify a
32-bit latch with 4 parity bits (36 bits all together) by instantiating
a single symbol, and putting in a stack-count of 36. It appeared on
plotted (micro-fiche vector art) schematic prints as a single block,
with a "36" displayed by breaking the bottom line of the ANSI-compliant
schematic symbol and placing the text there. I'll try some ascii-art:

+------+
|      |
|      +--OUT[0-31;P0-P3]
|      |
+--36--+

sorta like that, only prettier. Then, the signal name could be:
OUT[0-31;P0-P3] which would rip latch zero to bit 0, latch 1 to bit 1,
etc. You could specify swizzles: [0-30:2,1-31:2] meant count by two's:
[0,2,4,6...1,3,5...]
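The ripper notation Dave describes can be mimicked with a few lines of code. This is a hypothetical re-creation of the behavior he sketches, not real Amdahl or gEDA syntax: `;` separates groups, `-` gives a range (optionally with a letter prefix like `P`), and `:2` is a swizzle stride.

```python
# Hypothetical expansion of the bus-ripper notation described above:
# "0-31;P0-P3" names 36 bits, and "0-30:2,1-31:2" counts by twos.

def expand(spec):
    bits = []
    for group in spec.split(";"):
        for token in group.split(","):
            if "-" not in token:
                bits.append(token)        # a single, literal bit name
                continue
            rng, _, step = token.partition(":")
            lo, hi = rng.split("-")
            # Split a leading prefix such as "P" off the numeric part.
            prefix = lo.rstrip("0123456789")
            lo_n = int(lo[len(prefix):])
            hi_n = int(hi[len(prefix):])
            stride = int(step) if step else 1
            bits.extend(f"{prefix}{n}" for n in range(lo_n, hi_n + 1, stride))
    return bits

print(len(expand("0-31;P0-P3")))        # 36
print(expand("0-30:2,1-31:2")[:3])      # ['0', '2', '4']
```

So the 36-bit latch stack rips to bits 0..31 followed by P0..P3, and the swizzle yields all the evens before the odds, matching the `[0,2,4,6...1,3,5...]` ordering above.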

The refdes for a component stack was something like U31.7, the stack
item number being after the dot.

And your point about refdes on individual transistors is well-taken.
The Amdahl had hierarchical refdes's U3/U7.3/U4.4 (except they were
meaningful names, not "U", because we'd have gone loony tracking a CPU
full of stuff otherwise). And at any level, the system could generate
instance names like I$007 for instance names that you didn't care about;
but of course the rest of the CAD system wanted a unique instance name
on every little thing.

2) The second reason for productivity is that the attributes were a
completely separate database, under independent revision
control. Attributes were joined (very much a database join) to
instances by refdes. So you could have one schematic, and experiment
with different versions of the attached attributes very easily (which
was important for trying various place/route strategies in that day and
age.)

-dave
Peter Clifton
2014-07-09 12:58:44 UTC
Permalink
Alternatively, take the view that the variations are in fact distinct footprints.

OR.. That the variations could (for some cases) be applied in a mapping / post processing step during CAM export.

The features you mention are not ones I've seen in any other EDA package, so it might be useful to see whether and how they address these problems.

I've personally never had a board vendor want changes in gerber data to accommodate manufacturing processes. This is generally something they can do themselves using their CAM software if they need.

In the first instance, we need to extend our support for generating objects and geometry across all necessary layers. Making that adjustable for different cases seems like an orthogonal problem at this point.


---
Peter Clifton <***@clifton-electronics.co.uk>

(Sent from my phone)

-------- Original message --------
From: Peter Stuge <***@stuge.se>
Date:09/07/2014 09:25 (GMT+00:00)
To: geda-***@delorie.com
Subject: Re: [geda-user] pour clearing around pads
So the point of the above paragraph is, yes, I can suggest some extensions,
and now would be a good time to capture that since I am trying to wrap my
1. write up some straw-man spec extensions
2. update the "footprint creation for.." document with what ever settles
out of that.
Thanks for that!

I strongly agree with a unified data model effort.

But - in order for a database to become meaningful it needs to
explicitly support all "desired variations" already in use, and
probably have hooks to add new ones. Some examples of such desired
variations are:

* extending smd pads away from packages for easier hand soldering
* different paste aperture margins depending on stencil process
* bumping unplated drill diameters when fab only supports plated holes
* chip vendors having slightly different package specs for same package

and definitely many more.

Each variation needs to be a single check-box in a user interface,
which can be switched on and off at any time. Most variations have
parameters, which also need to be represented in the data model, as
variation profiles, so that users simply select the right fab profile
and done.
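The profile idea above -- each variation a parameterized, toggleable transform, with a fab profile bundling the settings -- might look like the following minimal sketch. All names here (`Pad`, `extend_for_hand_solder`, the profile keys) are hypothetical, not an existing pcb/gEDA API.

```python
# Minimal sketch of the "variation profile" idea: a variation is a
# parameterized transform on footprint geometry, switched on or off
# by a fab profile. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class Pad:
    width: float   # mm
    length: float  # mm

def extend_for_hand_solder(pad, extra_mm):
    """Variation: extend an SMD pad away from the package body."""
    return Pad(pad.width, pad.length + extra_mm)

def apply_profile(pad, profile):
    """Apply whichever variations the chosen fab profile enables."""
    if profile.get("hand_solder"):
        pad = extend_for_hand_solder(pad, profile["hand_solder_extra_mm"])
    return pad

# A fab profile: the single "check-box" plus its parameters.
profile = {"hand_solder": True, "hand_solder_extra_mm": 0.5}
pad = apply_profile(Pad(0.6, 1.0), profile)
```

Switching the profile entry off reproduces the stock footprint unchanged, which is what makes the variations safe to toggle at any time.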

Crowdsourcing the data is important, ideally allowing publication
from within the app itself, and certainly allowing
installation/updating of data from within the app itself. Think
Firefox extensions. A similar process needs to be supported for
adding a new fab profile, which might contain parameters for several
standardized desired variations, and possibly add fab-specific
variations.

Developing and maintaining such a data model is no simple nor small
task. I'll help work on it.
Investigate the feasibility of implementing the extensions.  I simply
don't know the code well enough.
I don't think any existing code fits a unified database and I think
it's OK to create an ambitious project to change that in the medium
to long term. The data model is the first step though, code comes
(much?) later.


Of course this is all no solution for the immediate term, but an
investment in open source tooling that might only really pay off
in a decade or three.


//Peter
DJ Delorie
2014-07-09 17:57:13 UTC
Permalink
Post by Peter Clifton
Alternatively, take the view that the variations are infact distinct footprints.
In my blue-sky on the subject, I mentioned a database of selection
criteria, and the criteria could be part-specific or project-wide. So
if you had a field for "hand-solderable" that selected between normal
footprints for reflow, or extended pads for home soldering, you could
use that field to select alternate footprints.

But that assumes you have a fairly complex database mapping groups of
symbols to groups of components which select groups of footprints.
*That* I've used before, way back when, but it was a very small
database.
Post by Peter Clifton
OR.. That the variations could (for some cases) be applied in a
mapping / post processing step during CAM export.
There's no reason why footprints can't be dynamically generated based
on parameters. We started with the m4 library and migrated to a fixed
library to better support Windows and the parts library dialog, but if
we can come up with a better way of doing it...
Dave Curtis
2014-07-09 21:53:14 UTC
Permalink
Post by DJ Delorie
Post by Peter Clifton
OR.. That the variations could (for some cases) be applied in a
mapping / post processing step during CAM export.
There's no reason why footprints can't be dynamically generated based
on parameters. We started with the m4 library and migrated to a fixed
library to better support Windows and the parts library dialog, but if
we can come up with a better way of doing it...
In general I'm skeptical of on-the-fly dynamic generation of
footprints. It gets hard to hand-tweak to repair some glitch somewhere,
and now there is *yet* *another* version-tracking problem if you
want to recreate a specific design.

I'm more inclined to the idea of a generator script plus a rules
database yielding a library of static footprints. New design rules? No
problem: mkdir; chdir; execute script. Now you have a new static
library. I'll admit that this has scaling issues at the enterprise
level, but I think it scales better than the version-control issues
of on-the-fly generation.
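The "generator script plus rules database" workflow can be sketched in a few lines. This is only an illustration of the mkdir-and-rerun idea: the rules, part table, and the text written to each `.fp` file are invented placeholders, not faithful pcb `Element[]` syntax.

```python
# Sketch of a generator script: a rules database drives generation of a
# directory of static footprint files. File contents are placeholders,
# not real pcb footprint syntax.

import os

RULES = {"min_clearance_mil": 8, "pad_extra_mil": 2}   # the "rules database"
PARTS = {"RES0805": (51, 24), "CAP0603": (37, 18)}     # pad length/width, mil

def write_library(libdir, rules, parts):
    os.makedirs(libdir, exist_ok=True)                  # the "mkdir" step
    for name, (length, width) in parts.items():
        w = width + rules["pad_extra_mil"]
        body = (f"# generated: clearance {rules['min_clearance_mil']} mil\n"
                f"# pad {length} x {w} mil\n")
        with open(os.path.join(libdir, name + ".fp"), "w") as f:
            f.write(body)

# New design rules? Edit RULES, point at a fresh directory, rerun:
write_library("lib-8mil", RULES, PARTS)
```

Because each rule set lands in its own directory of plain files, the generated library can be checked into version control as-is, which addresses the reproducibility worry about on-the-fly generation.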