Large Lists - Please read and comment

zenny
Posted: Thu Mar 17 09:08:17 CST 2005
Hello,
I am starting a new thread with a snippet from another topic. This concerns
database design and user interaction so I thought it might be good to hear
from some of you about this topic.
Dan Freeman said...
>When you think of it, large lists aren't very practical anyway. I *hate*
>having to scroll through large lists of files in Windows explorer.
>Say your client is paying an invoice. Chances are good they have the
>invoice
>in their hands so ask them who they're paying and bring back the
>outstanding
>invoices for that vendor.
My problem is that there are features that can only be implemented when you
can browse the entire database as the application allows now. I will grant
you that these features can be "wedged" into a client/server database but
the "full table browse" approach is more natural.
An example:
My system is a multi-company (over 200 companies) accounts receivable
system. I have invoice and payment transactions for over 200 different
companies, each of whom sells to thousands of customers (so it is no surprise
that I will soon be facing the 2,000,000 record barrier).
Transactions are often incorrectly posted and must be found. They can be
incorrect in a number of ways such as wrong company, wrong customer, wrong
amount, wrong invoice number, wrong check number, wrong check amount, wrong
invoice data, etc.
My search utility is basically a grid control (browse) of the entire table
that allows sorting and searching by any of these fields. Users search for
the information and then browse through information that is close. Because
everything is done with DBFs and index tags there is no delay when they
change search criteria and sort order.
An SQL database that demands requerying for each search would slow down the
application and provide fewer options.
How do some of you C/S people or opponents of "Large Lists" approach this
problem in an SQL type of environment?
Have any of you had to transition users from "Large List" applications to
C/S applications? How did it go?

Fred
Posted: Thu Mar 17 09:08:17 CST 2005
Prevent the entry of the wrong customer, etc., as much as possible beforehand.
<s>
You really have to get off of the "show me everything" track for an
effective client server approach. Use multiple pieces of info to narrow the
search.
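For example, a narrowed search might be sketched like this (Python/sqlite3 purely as illustration; the invoices table and its columns are invented names):

```python
# Sketch: build a parameterized WHERE clause from whichever criteria the
# user supplied, so the back end returns a small result set instead of
# the whole table. Table and column names are hypothetical.
import sqlite3

def find_invoices(conn, vendor=None, amount=None, check_no=None):
    clauses, params = [], []
    if vendor is not None:
        clauses.append("vendor = ?")
        params.append(vendor)
    if amount is not None:
        clauses.append("amount = ?")
        params.append(amount)
    if check_no is not None:
        clauses.append("check_no = ?")
        params.append(check_no)
    where = " AND ".join(clauses) if clauses else "1=1"
    sql = f"SELECT id, vendor, amount, check_no FROM invoices WHERE {where}"
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, vendor TEXT, amount REAL, check_no TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?,?,?,?)", [
    (1, "Acme", 100.0, "1001"),
    (2, "Acme", 250.0, "1002"),
    (3, "Globex", 100.0, "1003"),
])
rows = find_invoices(conn, vendor="Acme", amount=100.0)
```

The more pieces of info the user gives, the smaller the result set the server has to ship back.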
--
Fred
Microsoft Visual FoxPro MVP

Ook
Posted: Thu Mar 17 09:27:48 CST 2005
If you are working with a client/server system and large tables, it simply
is not feasible to display a list of any type with every entry in it. If the
users are used to having large lists and you are moving to client/server,
the challenge is not how to make a large list work; the challenge is how to
educate your users. They want large lists? They can have them, if they are
willing to wait 5 minutes for the query that populates their list to run.
Don't tell them no; give them choices: a) small lists with advanced search
features, or b) large lists that take forever to populate.
If you are working with large local tables, you can display the entire list,
but you should include some sort of search mechanism so users can quickly
find the desired record without having to scroll through thousands of
records.
My philosophy is, if it works, then do it. Large lists are not evil, bad
performance is. If you can't do it with good performance, don't do it.

Jeff
Posted: Thu Mar 17 09:31:51 CST 2005
Fred,
No disagreement there, but I've got users who have been used to having "show
me everything" and have been working with it for close to 20 years. If it
weren't for the 2 million record limit I wouldn't even consider upsizing the
database. The performance of the VFP native tables is just fine.
As far as preventing wrong data from being entered, I'm doing the best I can
and am constantly putting in new features to improve accuracy, but given the
volume of transactions, mistakes are inevitable and the ability to find them
is crucial.

Craig
Posted: Thu Mar 17 10:14:09 CST 2005
I've had the same problem. When I explain the better techniques used with C/S,
and they try them for a couple of months, they come to like the new way better.
Humans are generally resistant to change. You have to position it as
something better... which it really is.
--
Craig Berntson
MCSD, Visual FoxPro MVP
www.craigberntson.com
Salt Lake City Fox User Group
www.slcfox.org
www.foxcentral.net

Dan
Posted: Thu Mar 17 11:16:22 CST 2005
> An SQL database that demands requerying for each search would slow
> down the application and provide fewer options.
Only if that's the way you write the app. <g>
Don't take this the wrong way, but you appear to be questioning everything
_except_ your own core assumptions. You're behaving like the proverbial guy
who only has a hammer. Everything looks like a nail.
Given the nature of your app (which is actually similar to ours), you have
another possible road. Partition your data horizontally.
We have a separate subdirectory for each "company", with identical data
structures in each. Even our largest customers, with thousands of
"companies" who write hundreds of thousands of checks daily with our
software, aren't anywhere close to the 2GB file size limit.
Performance is better with smaller data sets; maintenance is easier when
they can pack/reindex just one company at a time; and support is easier
because we can retrieve a single "company" when a client has a problem.
Cross-company searches took a little creativity, but our customers are happy
with them. (Hint: Stonefield Query will save you a fortune!)
You've got a whole buncha options, but you're going to have to "think
outside the box" to take best advantage of them, rather than trying to fit
the options to your pre-conceived model.
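As a rough sketch of the routing this implies (Python just for illustration; in our app each "company" is a subdirectory of DBFs with identical structures, but the shape is the same, and every name below is invented):

```python
# Sketch of horizontal partitioning by company: identical schema in each
# partition, with the app routing every operation by company code.
import sqlite3

class CompanyData:
    def __init__(self):
        self._conns = {}  # company code -> that company's own small store

    def conn_for(self, company):
        # In a DBF world this would open a per-company subdirectory;
        # here each company simply gets its own database.
        if company not in self._conns:
            c = sqlite3.connect(":memory:")
            c.execute("CREATE TABLE trans (id INTEGER, amount REAL)")
            self._conns[company] = c
        return self._conns[company]

    def post(self, company, tid, amount):
        self.conn_for(company).execute(
            "INSERT INTO trans VALUES (?, ?)", (tid, amount))

    def count(self, company):
        return self.conn_for(company).execute(
            "SELECT COUNT(*) FROM trans").fetchone()[0]

store = CompanyData()
store.post("ACME", 1, 10.0)
store.post("ACME", 2, 20.0)
store.post("GLOBEX", 1, 5.0)
```

Each partition stays small, so no single file ever approaches the 2GB limit, and one company can be packed, reindexed, or shipped to support independently.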
Dan

Jeff
Posted: Thu Mar 17 11:42:50 CST 2005
> We have a separate subdirectory for each "company", with identical data
> structures in each. Even our largest customers, with thousands of
> "companies" who write hundreds of thousands of checks daily with our
> software, aren't anywhere close to the 2GB file size limit.
>
Wow, did I get confused, but in a way that turns out to be good for me. It
isn't 2 million records but rather 2GB file size that is the limit. I
assume that this is for DBFs; can CDXs and FPTs grow to whatever size is
necessary?
I'm nowhere near 2GB on my largest file. It is currently 1GB, but there are
many unused and hardly used fields left over from other coders. I could get
this file down to 0.5-0.6 GB without deleting a single record.
What a relief!
Thanks for pointing this out as I now have one fewer big worry.
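Just to sanity-check that relief: since DBF records are fixed width, file size grows roughly as header + record width x record count, so dropping unused fields shrinks every record. A quick back-of-the-envelope (the widths and header size below are invented round numbers, Python only for the arithmetic):

```python
# Rough DBF size model: a small fixed header plus one fixed-width slot
# per record. Numbers are illustrative, not from any real table.
def dbf_size_bytes(record_width, records, header=512):
    return header + record_width * records

current  = dbf_size_bytes(record_width=500, records=2_000_000)  # ~1GB
trimmed  = dbf_size_bytes(record_width=280, records=2_000_000)  # ~0.56GB
headroom = 2 * 1024**3 - trimmed  # distance from the 2GB per-file limit
```

So trimming the record width buys headroom even without deleting a single record.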
Regarding your actual suggestion, I have considered several ways of breaking
this data up because the redundancy is horrible and a pain to manage. Don't
even get me started on some of the bone-headed things that were done, like:
Customer name copied into the inventory file, with no attempt at keeping it in
sync with the customer file.
All invoice data (invoice number, date, amount, date posted, and about 30
other fields) copied into each debit or credit transaction for the invoice.
A 100-character field that is used in about 0.5% of the records.
Meanwhile, the "large list" question is still a valid discussion even though
it is not an immediate problem for me. Using the large-list approach I can't
really use views, which limits what I can do in my grid control.
For example, I have a SET RELATION between a TRANSACTION file and a
BOUNCEDCHECKS file. When that relationship turns up a hit I want to change
the DynamicBackColor of the column. It works sometimes, and it's a bit awkward
how it works when it does. A view would collect all the data into one
pseudo-table and the DynamicBackColor would be fine, but views and large
lists don't mix (at least as far as I know).
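To make the view idea concrete, a single joined query would give the grid one row set with a per-row bounced flag to color on (sketched in Python/sqlite3 just to show the shape; table and column names are invented stand-ins for TRANSACTION and BOUNCEDCHECKS):

```python
# Sketch: replace the SET RELATION lookup with one LEFT JOIN that flags
# each transaction whose check number appears in the bounced table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trans (id INTEGER, check_no TEXT, amount REAL);
CREATE TABLE bounced (check_no TEXT);
INSERT INTO trans VALUES (1, 'A1', 50.0), (2, 'A2', 75.0);
INSERT INTO bounced VALUES ('A2');
""")
rows = conn.execute("""
    SELECT t.id, t.check_no, t.amount,
           CASE WHEN b.check_no IS NULL THEN 0 ELSE 1 END AS is_bounced
    FROM trans t
    LEFT JOIN bounced b ON b.check_no = t.check_no
    ORDER BY t.id
""").fetchall()
```

The grid then only has to test the is_bounced column instead of evaluating a cross-table relation row by row.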
Thanks again.
Jeff

John
Posted: Thu Mar 17 13:57:19 CST 2005
Hey Jeff,
It sounds like Dan may have given you other options, but when the day comes
that you can't present the whole list, as Craig and Ook touched on, users do
what they're told and adjust more rapidly than their initial righteous
indignation might indicate.
Personally, I like the approach of "You can: 1) stop having any new business
and keep using this function--until the company closes; 2) spend millions of
dollars buying VFP from MS and hiring developers to overcome the 2GB
limit--until the company closes or 3) learn a little different way of doing
things and continue to be employed." <g>
Okay, so you don't have to be snide about it, but letting them know that
it's being done as a necessary response to growth, and getting them involved
in the discussions of how they use the system, often goes a loooong way toward
smoothing such a transition.
- John

Jan
Posted: Thu Mar 17 17:41:45 CST 2005
Ok guys,
I've read all the posts on this topic and mostly the thrust is:
- We want to use large lists (some of us, anyway)
- Customers are used to the look and feel
- What happens when the data outgrows our database capabilities?
- How can we do this smarter?
Some very good suggestions were made and I wouldn't disagree with any of
them entirely.
Here is my take on it.
Firstly, if you have that many companies entrenched in one database, you have
maximum exposure to calamity when data corruption occurs or hardware fails,
so split the companies into their own databases (regardless of VFP DBC or C/S
SQL) and house them on separate machines as far as practical (hardware and OS
are cheap compared to data). Structure the app to select the company and thus
access the appropriate server/database.
Secondly, some parts of an app work as lists (large or otherwise); other
parts should be selected. Products or customers can be lists, but only when
requested: allow the user to search for a customer, but give him a button to
open the list. In the case of products, allow the user to key in a product
code and, once again, give them the option of opening the list. Lists should
always allow searches on multiple keys. Taking this further: when maintaining
product or customer details, use a list (a grid with a search facility); when
processing transactions, use a search box with a list available via a button.
This approach avoids queries and requeries of the data. As for posting data
incorrectly, access to that data should make it difficult to post
transactions to the wrong company/customer. This approach will not alienate
those who are used to a purely list-driven application.
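The "key it in first, list on request" pattern might be sketched like this (Python/sqlite3 purely as illustration; the products table and codes are invented):

```python
# Sketch: try the keyed-in code for an exact match first, and only offer
# a filtered pick list when the exact match fails.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (code TEXT PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("AB100", "Widget"), ("AB200", "Gadget"), ("CX300", "Sprocket")])

def lookup(conn, keyed):
    """Exact match wins; otherwise return prefix candidates for a pick list."""
    exact = conn.execute(
        "SELECT code, name FROM products WHERE code = ?", (keyed,)).fetchone()
    if exact:
        return exact, []
    candidates = conn.execute(
        "SELECT code, name FROM products WHERE code LIKE ? ORDER BY code",
        (keyed + "%",)).fetchall()
    return None, candidates
```

Users who know the code never see a list at all; users who don't get a short, already-narrowed one.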
Lastly, to ensure data is captured correctly, use your app as a dialogue:
"You want to pay your 'ABC' bill? What is your name? Mr Smith, what is your
phone number? Thanks, your current balance appears to be .... Is this
correct? How much are you paying?" ... and so on.
Jeff, in your specific situation some major redevelopment would obviously be
of immense benefit, but you could probably play with these suggestions on a
small scale and try them out on your users without "imposing" a new solution
on them.
Well, that's my tuppence worth.
Rgds
Jan

Fred
Posted: Thu Mar 17 19:21:40 CST 2005
What 2 million record limit? VFP has no such limitation. The limit is 2GB
per file (2GB for the .DBF, 2GB for the .FPT, 2GB for the .CDX, etc.).
--
Fred
Microsoft Visual FoxPro MVP

Craig
Posted: Fri Mar 18 09:30:17 CST 2005
It's 2GB per file, so the CDX and FPT have the same limitation. Unused
fields won't help you any, as VFP stores data in fixed-length fields, even
if the field is empty.
--
Craig Berntson
MCSD, Visual FoxPro MVP
www.craigberntson.com
Salt Lake City Fox User Group
www.slcfox.org
www.foxcentral.net

Jeff
Posted: Fri Mar 18 10:02:39 CST 2005
I understand, but removing the unused fields from the database will reclaim
the space.
Thanks,
Jeff
> It's 2 Gig per file, so CDX and FPT have the same limitation. Unused
> fields won't help you any as VFP stores data in fixed length fields, even
> if the field is empty.
>
> --
> Craig Berntson
> MCSD, Visual FoxPro MVP
> www.craigberntson.com
> Salt Lake City Fox User Group
> www.slcfox.org
> www.foxcentral.net
>
>
>>> We have a separate subdirectory for each "company", with identical data
>>> structures in each. Even our largest customers, with thousands of
>>> "companies" who write hundreds of thousands of checks daily with our
>>> software, aren't anywhere close to the 2GB file size limit.
>>>
>> Wow did I get confused but in a way that turns out to be good for me. It
>> isn't 2 Million records but rather it is 2GB file size that is the limit.
>> I assume that this is for DBF's. CDX's and FPT can grow to whatever size
>> is necessary?
>>
>> I'm nowhere near 2GB on my largest file. It is currently 1GB but there
>> are many unused fields and hardly used fields left over from other
>> coders. I could get this file down to .5 - .6 GB without deleting a
>> single record.
>>
>> What a relief!
>>
>> Thanks for pointing this out as I now have one fewer big worry.
>>
>> Regarding your actual suggestion I have considered several ways of
>> breaking this data up because the redundancy is horrible and a pain to
>> manage. Don't even get me started on some of the bone-headed things that
>> were done. Like:
>>
>> Customer name copied into the inventory file. No attempt at keeping it in
>> sync with the customer file.
>>
>> All invoice data (Invoice Number, Date, Amount, Date Posted, and about 30
>> other fields copied into each debit or credit transaction for the
>> invoice)
>>
>> A 100-character field that is used in about 0.5% of the records.
>>
>> Meanwhile the "large list" is still a valid discussion even though it is
>> not for me an immediate problem. Using the large list approach I can't
>> really do views which limits what I can do in my grid control.
>>
>> For example I have a SET RELATION between a TRANSACTION file and a
>> BOUNCEDCHECKS file. When that relationship turns up a hit I want to
>> change the DynamicBackColor of the column. It works sometimes and it's a
>> bit awkward as to how it works when it does work. A view would collect
>> all the data into one pseudo table and the DynamicBackColor would be fine
>> but views and large lists don't mix (at least as far as I know)
>>
>> Thanks again.
>>
>> Jeff
>>
>
>
|
|
|
|
 |
Jeff

|
Posted: Fri Mar 18 12:25:56 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
Jan,
I agree with what you said but there are some issues that I need to resolve
in my application which may or may not be resolvable with anything other
than a "large list" approach.
Although we do the AR for 200 companies, many of them sell to the same
customers. There are several issues that come up in which we need to look at
the data by customer for all of our clients.
The first and most important is trying to spot payment problem patterns
which will show up in more than one company. I guess it is worth mentioning
that we also do the collections and check processing for our clients.
Example 1:
We get checks that don't bother to identify which company they are paying or
which invoice they are paying. We need to search by customer across all
companies.
Example 2:
We get checks that not only fail to identify the company or invoice, as in
example 1 above, but don't even come from the same company the invoice went
to. Searching all companies by amount comes in really useful here.
Example 3:
Our collections people want to call the companies with the highest total
open balance first. Having the transactions in a single file really makes it
easy to pull this data together.
I've got more reasons why moving away from the "large list" would make
things difficult here but I think you get the idea from these three
examples.
Thanks for the input.
Jeff
|
|
|
|
 |
Dennis

|
Posted: Fri Mar 18 12:50:27 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
On Fri, 18 Mar 2005 11:02:39 -0500 in
microsoft.public.fox.programmer.exchange, "Jeff Grippe"
> I understand but removing the unused fields from the database will reclaim
> the space.
>
If the field is *never* used, then of course it should be removed
from the table to save space. You can't, however, remove empty
fields from a row, since all the rows in the table must have the
same fields.
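Since every row carries every field at full width, the potential savings are simple arithmetic: the dropped field widths times the row count. A rough sketch (the field names and widths below are made up for illustration; only the row count is Jeff's ballpark figure):

```python
# Estimate DBF space reclaimed by dropping fixed-width fields.
# Field names and widths here are hypothetical illustration values.

row_count = 2_000_000          # roughly Jeff's transaction volume

# width in bytes of each field we plan to drop
dropped_fields = {
    "old_notes": 100,   # e.g. the 100-character field used in ~0.5% of rows
    "legacy_cd": 10,
}

bytes_saved = row_count * sum(dropped_fields.values())
print(f"Approx. space reclaimed: {bytes_saved / 1024**2:.0f} MB")
```

You'd actually realize the savings by copying the data into a table with the trimmed structure (e.g. with COPY TO or APPEND FROM), since the existing DBF keeps its record width until rebuilt.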
|
|
|
|
 |
Dennis

|
Posted: Fri Mar 18 12:57:56 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
On Fri, 18 Mar 2005 13:25:56 -0500 in
microsoft.public.fox.programmer.exchange, "Jeff Grippe"
> The first and most important is trying to spot payment problem patterns
> which will show up in more than one company. I guess it is worth mentioning
> that we also do the collections and check processing for our clients.
>
> Example 1:
> We get checks that don't bother to identify which company they are paying or
> which invoice they are paying. We need to search by customer across all
> companies.
>
then you do a search, not a full table browse! you limit
retrieval to only the matching records.
> Example 2:
> We get checks that not only are like example 1 above but don't even come
> from the same company that the invoice went to. Searching all companies by
> amount comes in really useful here.
>
once again, this is a search, not a full table browse. the
search occurs on the server, and you only retrieve the matching
records.
> Example 3:
> Our collections people want to call the companies with the highest total
> open balance first. Having the transactions in a single file really makes it
> easy to pull this data together.
>
well, yes, but once again you do an SQL select statement that
retrieves ONLY the relevant records, in the order you want. you
don't need to browse the entire table.
> I've got more reasons why moving away from the "large list" would make
> things difficult here but I think you get the idea from these three
> examples.
>
and none of your examples require *browsing the entire table*. i
think that you're stuck with thinking about things the way they
have been done (very inefficiently) and not about the way they
*should* be done to have an efficient and robust system.
IMHO you really need to think about ways to get people *only* the
data they need to do their job, not dump tons of records on them
that they have to fish through to find what they need. The best
thing about SQL is that you can use queries to narrow down the
information, so that users don't have to see irrelevant data.
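to make that concrete, here's roughly what your three searches look like as narrow queries. (this is only a sketch in python with sqlite as a stand-in back end -- the table and column names are invented, not your actual schema.)

```python
import sqlite3

# stand-in schema: table and column names are invented for illustration
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE trans
               (company TEXT, customer TEXT, amount REAL, open_bal REAL)""")
con.executemany("INSERT INTO trans VALUES (?,?,?,?)", [
    ("AcmeCo",  "Smith Inc", 200.0, 200.0),
    ("BetaCo",  "Smith Inc", 150.0,   0.0),
    ("GammaCo", "Jones LLC", 200.0,  75.0),
])

# example 1: one customer, across ALL companies
rows = con.execute(
    "SELECT company, amount FROM trans WHERE customer = ?",
    ("Smith Inc",)).fetchall()

# example 2: match by amount, regardless of company
hits = con.execute(
    "SELECT company, customer FROM trans WHERE amount = ?",
    (200.0,)).fetchall()

# example 3: customers ranked by total open balance, highest first
ranked = con.execute(
    """SELECT customer, SUM(open_bal) AS total_open
       FROM trans GROUP BY customer
       ORDER BY total_open DESC""").fetchall()
print(rows, hits, ranked)
```

every one of these comes back with a handful of rows, not the whole file.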
|
|
|
|
 |
Dan

|
Posted: Fri Mar 18 14:34:19 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
I agree with Dennis.
Each of your three cases is an example of a search, not of a "large
list" u/i. Imagine this conversation:
"I have a check for $200 but I don't know where to apply it"
"Here's every transaction on file. You figure out which one is $200."
Wouldn't a better response be:
"Here are all 10 transactions that are approximately $200. Which is yours?"
It's a matter of approach.
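For what it's worth, that "approximately $200" lookup is just a tolerance band in the WHERE clause. A sketch (SQLite as a stand-in back end; table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trans (id INTEGER, amount REAL)")
con.executemany("INSERT INTO trans VALUES (?,?)",
                [(1, 199.95), (2, 200.00), (3, 205.00), (4, 950.00)])

check, tolerance = 200.00, 10.00   # widen the band if nothing turns up
near = con.execute(
    "SELECT id, amount FROM trans WHERE amount BETWEEN ? AND ? ORDER BY amount",
    (check - tolerance, check + tolerance)).fetchall()
print(near)   # the handful of candidates, not the whole file
```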
Dan
> Jan,
>
> I agree with what you said but there are some issues that I need to
> resolve in my application which may or may not be resolvable with
> anything other than a "large list" approach.
>
> Although we do the AR for 200 companies, many of them sell to the same
> customers. There are several issues that come up in which we need to
> look at the data by customer for all of our clients.
>
> The first and most important is trying to spot payment problem
> patterns which will show up in more than one company. I guess it is
> worth mentioning that we also do the collections and check processing
> for our clients.
>
> Example 1:
> We get checks that don't bother to identify which company they are
> paying or which invoice they are paying. We need to search by
> customer across all companies.
>
> Example 2:
> We get checks that not only are like example 1 above but don't even
> come from the same company that the invoice went to. Searching all
> companies by amount comes in really useful here.
>
> Example 3:
> Our collections people want to call the companies with the highest
> total open balance first. Having the transactions in a single file
> really makes it easy to pull this data together.
>
> I've got more reasons why moving away from the "large list" would make
> things difficult here but I think you get the idea from these three
> examples.
>
> Thanks for the input.
>
> Jeff
|
|
|
|
 |
Jeff

|
Posted: Fri Mar 18 14:38:55 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
> and none of your examples require *browsing the entire table*. i
> think that you're stuck with thinking about things the way they
> have been done (very inefficiently) and not about the way they
> *should* be done to have an efficient and robust system.
I agree with most of what you said here. My users and I are both stuck with
thinking about how we do things now. It is worth mentioning, however, that
working with a single transaction table and many index tags has the virtue
of being blindingly fast.
The database has 25 in-house users and a few hundred web users hitting it
all day and the performance is just fine.
> IMHO you really need to think about ways to get people *only* the
> data they need to do their job, not dump tons of records on them
> that they have to fish through to find what they need. The best
> thing about SQL is that you can use queries to narrow down the
> information, so that users don't have to see irrelevant data.
I don't use the "large list" approach for everything. All of my reports use
SQL to fetch the results for printing/faxing/emailing. I only use the "large
list" for a few applications but those applications are critical to the
users who use them.
Thanks,
Jeff
|
|
|
|
 |
Jeff

|
Posted: Fri Mar 18 15:07:46 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
> Each of your three cases are examples of a search, not examples of a
> "large
> list" u/i. Imagine this conversation:
>
> "I have a check for $200 but I don't know where to apply it"
> "Here's every transaction on file. You figure out which one is $200."
You are overstating your case a bit because they don't really ever look at
more than one screen full at any time (which may be your point!) but...
With a large table and many indexes they can change the sort and do a search
and move to a different group of records rather quickly. This has all been
working quite well for years and I've only thought about changing it because
I thought (incorrectly) that I was running into a limit of VFP.
To change to a different way of querying my database really amounts to
designing the application to work the way the technology works, which in my
opinion is never a great idea. It shouldn't be the database that decides how
I have to fetch my data. A more robust version of Fox tables would end this
discussion and allow me to do whatever I want, using SQL when that works best
and SEEK / BROWSE when that's best.
|
|
|
|
 |
Anders

|
Posted: Fri Mar 18 07:41:27 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
There's a 2GB limit on any type of file in VFP, be it DBF, CDX, FPT, or
TXT.
-Anders
> > We have a separate subdirectory for each "company", with identical data
> > structures in each. Even our largest customers, with thousands of
> > "companies" who write hundreds of thousands of checks daily with our
> > software, aren't anywhere close to the 2GB file size limit.
> >
> Wow did I get confused but in a way that turns out to be good for me. It
> isn't 2 Million records but rather it is 2GB file size that is the limit. I
> assume that this is for DBF's. CDX's and FPT can grow to whatever size is
> necessary?
>
> I'm nowhere near 2GB on my largest file. It is currently 1GB but there are
> many unused fields and hardly used fields left over from other coders. I
> could get this file down to .5 - .6 GB without deleting a single record.
>
> What a relief!
>
> Thanks for pointing this out as I now have one fewer big worry.
>
> Regarding your actual suggestion I have considered several ways of breaking
> this data up because the redundancy is horrible and a pain to manage. Don't
> even get me started on some of the bone-headed things that were done. Like:
>
> Customer name copied into the inventory file. No attempt at keeping it in
> sync with the customer file.
>
> All invoice data (Invoice Number, Date, Amount, Date Posted, and about 30
> other fields copied into each debit or credit transaction for the invoice)
>
> A 100-character field that is used in about 0.5% of the records.
>
> Meanwhile the "large list" is still a valid discussion even though it is not
> for me an immediate problem. Using the large list approach I can't really do
> views which limits what I can do in my grid control.
>
> For example I have a SET RELATION between a TRANSACTION file and a
> BOUNCEDCHECKS file. When that relationship turns up a hit I want to change
> the DynamicBackColor of the column. It works sometimes and it's a bit awkward
> as to how it works when it does work. A view would collect all the data into
> one pseudo table and the DynamicBackColor would be fine but views and large
> lists don't mix (at least as far as I know)
>
> Thanks again.
>
> Jeff
>
>
|
|
|
|
 |
Gene

|
Posted: Tue Mar 22 12:04:25 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
>I agree with Dennis.
>
>Each of your three cases are examples of a search, not examples of a "large
>list" u/i. Imagine this conversation:
>
>"I have a check for $200 but I don't know where to apply it"
>"Here's every transaction on file. You figure out which one is $200."
>
>Wouldn't a better response be:
>
>"Here are all 10 transactions that are approximately $200. Which is yours?"
>
>It's a matter of approach.
Yours leaves out the two $100 transactions that the $200 might be
payment for. Payment reconciliation is a bugbear, and Mr. Grippe has
it worse than most.
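That said, even the split-payment case doesn't require a full browse: narrow
to one customer's open items with a query first, then look for small
combinations that sum to the check. A sketch (the data is invented; brute
force over singles, pairs, and triples is fine once the list has been
narrowed to one customer):

```python
from itertools import combinations

# open invoices for ONE customer, already narrowed by a query (invented data)
open_items = [("inv1", 100.00), ("inv2", 100.00),
              ("inv3", 75.00), ("inv4", 50.00)]
check_amount = 200.00

# brute force is acceptable here because the candidate list is small
# after it has been restricted to a single customer's open items
matches = [
    combo
    for r in range(1, 4)                      # singles, pairs, triples
    for combo in combinations(open_items, r)
    if abs(sum(amt for _, amt in combo) - check_amount) < 0.005
]
print(matches)   # the $100 + $100 pair turns up here
```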
[snip]
Sincerely,
Gene Wirchenko
|
|
|
|
 |
Dennis

|
Posted: Tue Mar 22 13:01:51 CST 2005 |
Top |
Exchange Servers >> Large Lists - Please read and comment
On Fri, 18 Mar 2005 16:07:46 -0500 in
microsoft.public.fox.programmer.exchange, "Jeff Grippe"
> You are overstating your case a bit because they don't really ever look at
> more than one screen full at any time (which may be your point!) but...
>
which is EXACTLY the point! why retrieve data across the wire
that you're not going to even look at?
> With a large table and many indexes they can change the sort and do a search
> and move to a different group of records rather quickly. This has all been
> working quite well for years and I've only thought about changing it because
> I thought (incorrectly) that I was running into a limit of VFP.
>
so they're NOT using the whole file, they ARE using a GROUP of
records with a common characteristic. if you move the entire
file into the ram of their machine you are wasting a lot of
bandwidth (and speed!).
seriously you need to test some SQL queries just to see what we
mean. stop talking about it and try it! once you've tried a
cursor and seen how much faster the performance is, you won't
ever retrieve a whole table again!
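and "one screenful at a time" falls out of the query approach for free --
keyset paging, for example. (a sketch with sqlite as a stand-in back end;
table and column names are invented.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trans (id INTEGER PRIMARY KEY, customer TEXT)")
con.executemany("INSERT INTO trans(customer) VALUES (?)",
                [("Cust%03d" % i,) for i in range(100)])

PAGE = 25                      # about one grid screenful
last_id = 0                    # key of the last row on the previous page

# fetch only the next screenful past the last row the user saw
page = con.execute(
    "SELECT id, customer FROM trans WHERE id > ? ORDER BY id LIMIT ?",
    (last_id, PAGE)).fetchall()
print(len(page), page[0], page[-1])
```

to scroll, set last_id to the key of the last row shown and rerun -- the server never ships more than one screenful across the wire.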
|
|
|
|
 |
|
|