DBEU Phase II

Here is what is on the DBEU FAQ for Phase II:

Phase Two will be executed as part of the installation of the Banner XE applications such as Catalog, Schedule, Faculty Grade Entry, etc. This phase of the Database Extension Utility will "enable" 2 of the columns added in Phase 1 (SURROGATE_ID and VERSION) and will only do this for tables that are associated with the Banner XE application being installed. Enabling the columns will consist of populating each column with a default value and updating each column to be NOT NULL. Triggers to populate these columns will also be added at this phase.
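Reading that description, the enabling step presumably boils down to something like the following for each affected table. This is only my sketch of the steps the FAQ lists, not the actual DBEU code; some_xe_table and some_xe_seq are placeholders, and I'm guessing that a sequence feeds SURROGATE_ID and that 0 is the default for VERSION.

update some_xe_table set version = 0 where version is null;                              -- guessed default value
update some_xe_table set surrogate_id = some_xe_seq.nextval where surrogate_id is null;  -- guessed sequence
alter table some_xe_table modify (version not null);
alter table some_xe_table modify (surrogate_id not null);
-- plus a trigger to keep the columns populated going forward, along the lines of the junk example below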

 

I was able to create a junk table with two NOT NULL columns, temp1 and temp2.
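Something along these lines is all the table needs to be (the column types are just a guess; what matters is the two NOT NULL constraints). It mirrors the trick Phase II uses on SURROGATE_ID and VERSION: a NOT NULL column that a BEFORE INSERT row trigger fills in before the constraint is checked.

CREATE TABLE junk
( temp1 VARCHAR2(30) NOT NULL,
  temp2 VARCHAR2(30) NOT NULL );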

I then created the following trigger:

 

CREATE OR REPLACE TRIGGER ft_junk

BEFORE INSERT ON JUNK

FOR EACH ROW

BEGIN

IF :NEW.temp2 IS NULL THEN

:NEW.temp2 := 'test2';

END IF;

END;
/

 

Then the following worked:

B8TST> insert into junk (temp1) values ('test');

 

1 row created.

 

B8TST> select * from junk;

 

TEMP1      TEMP2

---------- ----------

test       test2

Posted in Uncategorized | Leave a comment

IITFEED Fun

The last two weeks I've had to deal with two instances where the daily ii jv feed didn't balance. Usually I get a call from controllers asking me to investigate. As is almost always the case, both instances were due to an error in the original spreadsheet that was loaded into Banner. In both cases the datestamp for the particular VAxxxxxx jv was not the same across all seqno records. The records all made it into the gurfeed table anyway. The first time this happened, fwpfedh.sql did not put header records into gurfeed. These are the record type 1 records, and there should be one of them for each jv. The second time, fwpfedh.sql put header records into gurfeed but the total only went in for one of the timestamps. Either way, when FURFEED runs on the institution side it can't handle it.

The first case left all the records in gurfeed and didn't process anything. To fix it, I added a header record manually with the correct total and then ran furfeed, fgrtrni, fgrtrnr, and fgbactg in the institution instance. I had to get the big schools to do it themselves since I don't have access to their databases. The second case hit every institution and fed only the part of the jv that matched the header timestamp. Since that part posted, it required a new correcting jv referencing the old one to post the missing rows. We deleted the orphan records in gurfeed.

This was a lot of work for everybody, so I made a mod to iitfeed.pc to throw out any jvs that do not have a matching timestamp on all the seqno rows for a particular jv. A jv like that should never load into owag to be split up.
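A hedged way to catch these before they feed: these notes don't pin down exactly which gurfeed column carries the datestamp, so gurfeed_activity_date is an assumption here, but the idea is to flag any doc whose rows carry more than one distinct stamp.

select gurfeed_doc_code,
       count(distinct gurfeed_activity_date) as stamp_count   -- column name is an assumption
  from gurfeed
 group by gurfeed_doc_code
having count(distinct gurfeed_activity_date) > 1;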

Posted in IIJV FEED | Leave a comment

II JV Feed Notes (IITFEED)

II JV Feed

 

Overview:

The inter-institutional jv feed is the most complex one we run. JVs get into owag via two methods: a form, FWAIITD, which institutions bring up in owag, and spreadsheets from the institutions and telecom. Telecom, UO, and CO use the spreadsheets; the others use the form. The spreadsheets can get up to 3100 lines of data. Text that goes with a JV comes from the form FOATEXT in owag. The JV doc codes are typically VA######. In the chps flat directory you will see files named J00xxxxxx which are not part of the iitfeed process; they are related to chpsfeed, which uses the same directory. 90% of issues with the ii jv feed come from the spreadsheets.

There are 3 owag oracle scheduler jobs that run around 5 am. Two are related to pre- and post-feed reports. The other, IITPROC_DAILY_JOB, runs the feeds. The data from the form and spreadsheets get loaded into some custom tables in owag (fwbiitd, fwbiith, fwbfedh). Then a process runs which splits them into several files for each institution and places them in the institution's appropriate flat directory. From there we handle feeding the files that came from the owag split into the 5th site databases. The big 3 handle feeding these files into their databases themselves: UO feeds theirs at 8 am, OSU at 3 pm, and PSU at 6 pm.

What you typically have is JVs moving from the schools to controller's and JVs from controller's running out to the schools (including chps). The telecom stuff runs out to all the schools once a month. There is a cutoff date, set by Nick Miller a few days before period end, which will not allow II JVs to process if they are in the period but after the cutoff date. This is in ftvsdax. At year end there is also a setting to only allow controller's to send ii jvs.
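If you need to find the cutoff entry, something like this can help you hunt for it in ftvsdax; the column names are the standard ftvsdax ones, but the LIKE patterns are just guesses since the exact internal code isn't listed here.

select ftvsdax_internal_code, ftvsdax_internal_code_group, ftvsdax_external_code
  from ftvsdax
 where upper(ftvsdax_internal_code_group) like '%II%'    -- guessed pattern
    or upper(ftvsdax_internal_code) like '%IIT%';        -- guessed pattern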

 

Scripts in owag subversion:

iitproc.shl   – main one

iitchps.shl

iiteou.shl

iitoit.shl

iitsou.shl

iitwou.shl

Generated files in institution flat directories:

e.g. /san/flat/wou/wouprd/fin/iitfeed/

fwbfdhe.dat

fwbtxte.dat

fwpfdhe1.lis

fwpfdhe2.lis

fwpfdhe4.lis

Issues:

Controller’s calls and can’t remember if they already uploaded a spreadsheet:

–          Go to /san/flat/ecs/chps/fin/iitfeed and look for it there and in the arch directory.

Controller's calls and asks if a certain VA###### loaded:

–          If it's from a spreadsheet, see if they ever uploaded the file. Grep for it in /san/flat/ecs/chps/fin/iitfeed/ and in the arch directory.

–          Check the history table in owag, fwbfedh, for it. You can get the date it loaded, and the fwbfedh_transmitted_ind flag tells you whether or not it was placed in the institution files (see the query sketch after this list).

Controller's calls and asks you to help them figure out why a JV didn't load:

–          First see if it loaded. Keep in mind the same JV can go to multiple institutions. If it was in a spreadsheet, make sure the file was there.

–          If the file was there but you have no record in fwbfedh, go to the owag spool directory, do an ll *iit*, and look through the files to see if you can find anything. Also check fwpiitd.lis, which will tell you what loaded into fwbfedh that morning, and fwpfedh.lis, which will tell you what document went where that morning.

–          Go to the TreasOps/IITFEED folder in the restricted share and look for the fwpfeeed_error.lis file. It will show any issues with loading the file into owag.

–          If it loaded and the fwbfedh_transmitted_ind flag is set to Y, go to the flat directory of an institution (say chps) and look for the furfeed and fgrtrnr files. Ask controller's if the JV ended up in suspense.

–          The spreadsheet consists of a type 1 header record and then one to many type 2 detail records for each JV. A lot of times the total of the detail records doesn't equal the total on the header record. They sometimes forget a header record. The sqlldr is position driven, so sometimes they mess up a column width. They may have tried to feed it before and forgot they did. Sometimes they put a blank line at the end of the file.

Controller’s calls and says they want to re-feed a file:

–          If it posted, you can’t re-feed the records that posted.  If it’s in suspense, controller’s has to remove it from all the banner instances that it went to before you re-feed.

–          Check the tables fwbiitd, fwbiith, and fwbfedh and remove records for the JV.

–          You want the morning job to take care of it; don't run it manually unless you have to.

Controller’s or UO calls you and says the feed bombed:

–          Review the iitproc log in the owag spool directory and figure out where it bombed.

–          Ask controller's to send out an email to the campus business officers informing them that the ii jv load failed that morning. OSU does their institution load at 3 pm, but the other two bigs do it in the morning, so chances are they tried to load files from the previous day and had their process abort.

–          Chances are a spreadsheet caused a sqlldr to abort.

–          Remove the file and run the job manually on the owag oracle scheduler, unless controller's wants to try to fix the file first.

–          Ask controller’s to inform campuses when you’ve run it.
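For the "did a certain VA###### ever load" question, here is a sketch of the fwbfedh check mentioned above. The table fwbfedh and the fwbfedh_transmitted_ind column come straight from these notes, but fwbfedh_doc_code is a guess at the doc code column name.

select *
  from fwbfedh
 where fwbfedh_doc_code = '&doc_code';   -- fwbfedh_doc_code is assumed; &doc_code is the VA number you're chasing
-- a fwbfedh_transmitted_ind of Y means the doc was written out to the institution files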

 

Posted in IIJV FEED | Leave a comment

Treasury Feeds

 

Overview:

The treasury feeds run 4 overall shell scripts: fwptrek, fwptred, fwrckrr, and chkload. Fwptrek downloads a check file from US Bank and eventually loads it into the table fwbckrc. Fwrckcc, a critical report for controller's (questionable checks), is also run in this script. This runs on the owag oracle scheduler at noon every weekday except US Federal Holidays and can be run manually from owag job submission. Fwptred downloads the direct deposit file from US Bank and runs fwrbssab to load it into the table fwbbsdp. It runs on the owag oracle scheduler at 8 pm and can also be run directly via owag job submission. Fwrckrr and chkload are job submission programs that controller's runs. Fwrckrr breaks the check records in the table fwbckrc into separate flat files for the 5th sites and OSU; there's a payroll file and a regular check file. Once the files are there, controller's runs chkload to load the individual 5th site Banner instances. The OSU files are just moved and archived, not loaded.

 

Issues:

Although occasionally there is no file from the treasury to download at noon, most of the issues here come from chkload. Chkload used to be run one institution at a time, but we gave them the option to select them all, so if the script aborts on one institution's file it can hold up the rest after it. Check the /san/flat/ecs/owag/fin/treasury/chkrec/ directory for errors as well as the owag spool directory. A file that has a bad record will process everything but that record, so you don't want to process it again. You have to manually archive the file and send an email to controller's with the bad record. Once the bad file is archived they can run chkload again to finish the rest of the files.
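When chkload misbehaves, a quick way to confirm how much actually made it into the load tables is just to count rows; these notes don't name any columns on fwbckrc or fwbbsdp, so anything fancier than count(*) here would be a guess.

select count(*) from fwbckrc;   -- check records loaded by fwptrek
select count(*) from fwbbsdp;   -- direct deposit records loaded by fwptred (fwrbssab)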

Posted in Treasury | Leave a comment

PCARD Flow SUMMARY

PCARD has an am process and a pm process, which run on the owag oracle scheduler. These processes can also be run manually via owag job submission.

 

The shell scripts are in the owag subversion directories and are:

> ll pcard*

-rwxr-xr-x 1 wendlerb ecs_dev 5092 Feb 27 08:51 pcard_co_am.shl

-rwxr-xr-x 1 wendlerb ecs_dev 2552 Feb 19 08:29 pcard_co_pm.shl

-rwxr-xr-x 1 wendlerb ecs_dev 4958 Mar  4 13:13 pcard_eou_am.shl

-rwxr-xr-x 1 wendlerb ecs_dev 2537 Feb 19 08:30 pcard_eou_pm.shl

-rwxr-xr-x 1 wendlerb ecs_dev 4965 Jun 19 08:08 pcard_oit_am.shl

-rwxr-xr-x 1 wendlerb ecs_dev 2569 Feb 19 08:31 pcard_oit_pm.shl

-rwxr-xr-x 1 wendlerb ecs_dev 4962 Jul  2 09:04 pcard_sou_am.shl

-rwxr-xr-x 1 wendlerb ecs_dev 2494 Jul  2 09:07 pcard_sou_pm.shl

-rwxr-xr-x 1 wendlerb ecs_dev 4916 Jun 17 09:37 pcard_wou_am.shl

-rwxr-xr-x 1 wendlerb ecs_dev 2488 Feb 19 08:32 pcard_wou_pm.shl

The am process:

Connects to the proxy serenity with a separate institution account for the school and runs a script on serenity which does an sftp transfer of the bankfile from US Bank. The file is eventually transferred to the appropriate flat directory on js1 and given the name bankfile.txt. Next the file gets sql loaded into a temp table and has some manipulation done to the data via zfpcard300 and zfpcard400. Fapcard runs, then furfeed, fgrtrni, and fgrtrnr. Some reports are generated and a process kicks off to email the vendors. We ran into an issue with the email vendor piece (pcard_gurtklr_emails) because of an upgrade to Student which hit sokemal. Right now there are two versions of this email script because not all schools are on the newest version of Student.

 

The pm process:

A sql script runs to push the sweep date back a day. Fapinvt runs, which sweeps up the day's transactions and creates the Z doc invoices. Some reports get generated.

The usual suspects:

90% of the issues with this come when an institution forgets to enter a pcard number into Banner after they've issued it. The card shows up in the bankfile from US Bank and Banner has no record of it. When this happens the fapcard report will show an error and 0 records processed, and the furfeed lis will show nothing fed. You need to get the school to enter the card into Banner. After they do that, you've got a couple of steps to do:

1. Move the archived file from the archive directory back to the main directory and rename it to bankfile.txt.

e.g. mv /san/flat/wou/wouprd/fin/pcard/arch/bankfile20130627_data.lis /san/flat/wou/wouprd/fin/pcard/bankfile.txt

2. Log into owag and run the am shell for the school from job submission. Change parameter 1 from N to Y and then submit.

 

Never had an issue with the pm process.

Posted in PCard | Leave a comment

AP Direct Deposit Notes

AP Direct Deposit Run

 

FARINVS – report

–          Report that lists invoices headed for direct deposit and checks

FWAPACH – form

–          Lists count of prenotes and pending ach transactions

–          A record on fatckno is created by a form trigger with a dummy starting number, !0000000, when step 1 is clicked. If they click the step 1 button, the record gets created and will stay there if they decide not to run fabchks for some reason.

–          5 steps on the form: fabchks, fwpdird, fabchkr, fzbmail, and fabchka.

–          Direct deposit ACH and the FIS part of the AR feed all use the same basic flow process, and all check this table before initiating. The AR feed has a sql script in it that, when a record exists in fatckno before the FIS portion of the feed begins, updates a field on the fatckno record to the wrong type of value, causing the feed to abort there with an oracle error. This is on purpose and pops up from time to time. AP needs to complete their check run or ACH run by the end of the day, before the AR feed fires off in the evening; you do not want the FIS portion of the AR feed processing checks when there are ones left in there from AP. ACH also populates this table, but with a dummy value in the starting check number field (see the sketch just below).
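A minimal sanity check before the evening AR feed, using only what's described above; counting rows on fatckno avoids guessing at its columns.

select count(*) from fatckno;
-- a leftover row here in the evening means an AP check or ACH run is still open,
-- and the FIS portion of the AR feed will abort on purpose when it sees it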

 

FABCHKS – process

–          Populates temp check tables fatckin and fatckdt.  Check num is set to null on these records.

–          Normal run produces an empty lis file.  If the lis file has something in it, it will tell you some kind of error usually along the lines of duplicate record, etc.

–          If no fatckno record exists, this will still complete successfully but not do anything, so look for that if no records show up in the two tables like they should after running this. It comes up sometimes when functional users wonder why they aren't getting any direct deposits.

–          OUS has mods on this to prevent it from picking up student checks

–          The gxrdird record (bank info, status, ach, etc.) has to be correct for the record to get picked up.

FWPDIRD – process

–          Populates tables fabchks and fabinck.

–          Pulls records from fatckin and gxrdird

–          Output is a direct deposit ach file that gets sent to the treasury.  The file is saved to a flat directory location.  There is a shell script the schools run to actually transfer the file from the flat directory to the treasury

–          Sets each fatckin and fatckdt check num to a number beginning with !.

 

FABCHKR – report

–          Check register report

–          This lists the direct deposits.

FZBMAIL – process

–          Uses fatckno, fatckin, and fatckdt to email vendor notification.

–          The email addr type and seqno must match that on the fatckin record and be active.

FABCHKA – process

–          This submits the ach for posting.

–          Populates tables fab1099, fabchka, and fobappd.

–          The output lis file is empty for a normal run.  If something is in it, you’ve got some kind of error.

–          The temp tables fatckin and fatckdt along with that initial table fatckno are cleared out.

–          Posting actually posts the ach.

Posted in ACH | Leave a comment

A/P Check Run Summary Notes

AP Check run

 

FARINVS – report

–          Report that lists invoices headed for direct deposit and checks

FAABATC – form

–          Creates record on fatckno with beginning check number

–          The record on fatckno is checked first, before the whole AP check run initiates.

–          Direct deposit ACH and the FIS part of the AR feed all use the same basic flow process, and all check this table before initiating. The AR feed has a sql script in it that, when a record exists in fatckno before the FIS portion of the feed begins, updates a field on the fatckno record to the wrong type of value, causing the feed to abort there with an oracle error. This is on purpose and pops up from time to time. AP needs to complete their check run or ACH run by the end of the day, before the AR feed fires off in the evening; you do not want the FIS portion of the AR feed processing checks when there are ones left in there from AP. ACH also populates this table, but with a dummy value in the starting check number field.

 

FABCHKS – process

–          Populates temp check tables fatckin and fatckdt

–          Normal run produces an empty lis file.  If the lis file has something in it, it will tell you some kind of error usually along the lines of duplicate record, etc.

–          If no fatckno record exists, this will still complete successfully but not do anything, so look for that if no records show up in the two tables like they should after running this. It comes up sometimes when functional users wonder why they aren't getting any checks.

FABCHKP – process

–          Populates tables fabchks and fabinck.  In addition, it inserts a record in fatckno with the ending check number

–          Output is a lis file that looks somewhat like an unformatted check file.

–          The table fabinck is new and allows us to regenerate a check file after a check run is completed if you know the starting and ending check numbers. It used to be you were dead in the water.

FWPCHKP – process

–          Generates the check file with custom fonts and images that gets printed on the check stock

–          Each institution has its own custom images, signatures, etc., and handles getting the micr font and custom macros into printer memory differently. Some, like WOU, have the macros and fonts stored internally in the printer so they can print to it directly from linux; others load them into printer memory with some type of bat file and ftp that file to a local machine.

–          The file itself has a lot of PCL in it. Each check starts with the same PCL header line, so you can edit this file if you only need to print a certain range of checks from it: just delete the block of lines for the checks you don't want.

FABCHKR – report

–          Check register report

–          This lists the checks that got printed.

FABCHKA – process

–          This submits the checks for posting.

–          Populates tables fab1099, fabchka, and fobappd.

–          The output lis file is empty for a normal run.  If something is in it, you’ve got some kind of error.

–          The temp tables fatckin and fatckdt, along with that initial table fatckno, are cleared out (a quick verification query follows these notes).

–          Posting actually posts the checks.
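A quick way to verify the cleanup FABCHKA is supposed to do, using only the table names from these notes:

select count(*) from fatckin;
select count(*) from fatckdt;
select count(*) from fatckno;
-- all three should come back 0 after a completed check run;
-- anything left in fatckno will trip up the evening AR feed as described under FAABATC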

Posted in A/P Checks | Leave a comment

ii JV VA842810

Got a call from controllers that the day's balance from the iijv feed was out of whack. They thought it was due to doc VA842810. The first thing I did was look at what loaded into the table on owag, fwbfedh. I found that there were VA842810 records for charts K, C, D, and F. Furthermore, chart K posted ok but the others did not. I looked in the san directory iitfeed archive to take a look at the files that are sent to the institutions to load. Examining the files, I noticed that the header record for this JV was missing. The header record is a type 1 and has VA8428101 in it, while the other records are type 2 and have VA8428102 in them.

So what happens when there is no header record in the file? Sure enough, I found VA842810 stuck in the gurfeed table at OSU, PSU, and SOU because there was no header record in gurfeed. I had the institutions add a header record for it in gurfeed, and the records cleared out after furfeed and fgrtrni.

Now to figure out what caused it.  Information comes into owag for these iijv files either through an owag form or by an upload process which pulls files from /san/flat/ecs/chps/fin/iitfeed/.  I looked at the file that contained VA842810 and noticed the header record had a different timestamp than the other records.  I believe this is what caused it.
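For next time, here is a sketch of how to spot documents sitting in gurfeed with detail records but no type 1 header. I'm assuming the standard gurfeed column names (gurfeed_doc_code, gurfeed_rec_type).

select gurfeed_doc_code
  from gurfeed
 group by gurfeed_doc_code
having sum(case when gurfeed_rec_type = '1' then 1 else 0 end) = 0;   -- no header row for the doc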

Posted in IIJV FEED | Leave a comment

SOU Bookstore Feed FZRBKST

Every once in a while SOU's bookstore feed fails, and I've decided to share with some of the folks at SOU the process of how I figure out what happened. The first thing is to find the job submission log in the spool directory. It can give you a rough idea of where the job failed. The shell script writes most of its output to the invfeed folder in SOU's flat directory on js1. Unfortunately, the first thing an end user typically does when they get a failure message in Banner is try to run the script again, which overwrites or removes most of the files that will tell you what happened. So I check to see if any data made it into the temp tables and remove it if it's there. Then I move the archived data file back to the main directory, rename it properly, and have the user run the job again, stopping after it aborts.

In a nutshell, the script reads the datafile (*bookinve.dat) and breaks it into 3 datafiles, each of which gets sql loaded into a separate temp table (fwbinvh, fwrinvc, fwrinva). Each of these loads has a log file with it, and 95% of the time you can find the problem in one of these logs. The job submission log will point you in this direction. In the last failure, I saw output from the first load but nothing from the 2nd or 3rd. When I checked the 1st sql load log I found that a date column was erroring out: the datafile had a date column in the wrong format.

If the 3 loads were successful, fwrinvf runs. You should see a SOU_fwrierr.lis output from it in the invfeed folder; look for any errors in there. After that, fgrtrni runs to mark the transactions postable. Its output file is normally empty, so if you see something in it that means you have an error, and if you have an error here all the feeds will be screwed up till we get it fixed. Normally I have to delete the records in the fgbtrni table causing the error. Fgrtrnr then runs to see if any of the posting ended up in suspense, and an email goes out with the SOU_fwrierr.lis in the body.
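For the cleanup step I mentioned earlier, this is all I mean by checking and clearing the temp tables; the table names come straight from these notes, and row counts and full deletes don't require knowing their columns.

select count(*) from fwbinvh;
select count(*) from fwrinvc;
select count(*) from fwrinva;
-- if a failed run left rows behind, clear them out before the user re-runs the feed
-- delete from fwbinvh;
-- delete from fwrinvc;
-- delete from fwrinva;
-- commit;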

 

Posted in Bookstore | Leave a comment

FWROUSF

Got an email telling me that only one chart was being selected into the TX2 file this month instead of all the charts. Investigating, I found that chart B, the first chart selected, was the only one showing up. The other files produced had all the chart data, just not the TX2 file. No code had been changed since the month before, when it worked correctly. OWAG did, however, have an oracle upgrade since then. The log said all the charts were being processed.

 

I put in some debug statements to try to get a handle on what was going on. At first I thought maybe the cursor wasn't closing anymore between charts. The code used a report call to get the charts and then another report call inside the first one to do the select_meta_data and select_hier_data cursors. I commented out several processes so I could focus on the TX2 piece and limited the charts to E and F. It turned out it was fetching all the rows for the first chart selected but then only fetching one row for each chart that followed. It would fetch the one row and then exit the inner report call without even printing the record.

I tried opening a differently named cursor for each chart, but that didn't seem to make a difference. I tried closing the cursor before opening it, and that didn't help. Finally I rewrote the logic from using a report call inside a report call to having a direct procedure call inside the first report call. That took care of it, but I'm still not sure why it broke.

 

Posted in Aggregate | Leave a comment