Build Code Samples
This topic presents examples and code snippets of Spectre Build scripts.
With the introduction of the Spectre Build Flow Editor, you no longer need to manually edit Spectre build scripts. However, you still have the option to examine and edit the scripts in a text editor, so you might find these samples and notes useful.
The following example shows parameters for varying the input and output file names.
build { take-parameter "Input File" take-parameter "Output File" default="AllDistricts.cbase" take-parameter "District Code" default="integer" take-parameter "District Label" default="District #" text-input "$(Input File)" { column "District" type="$(District Code)" { property "Label" "$(District Label)" } } output "$(Output File)" }
The following sample Spectre Build script uses a text-input tag. It formats two date columns and a fixed100 number column, and it replaces the value of another column. It also sets some columns to behave like classic Info Fields. For more information about the text-input tags, see Text, cBase, and Dive Input Build Tags.
build { text-input "demo_drl.txt" { column "Order Num" type="string" column "Date" type="integer" column "YearMo" type="period"calendar="gregorian month" format="YYYY/MM" column "Quarter" type="period"calendar="gregorian quarter" format="YYYY/Q" column "Customer" type="string" column "Customer Name" type="string" column "City/State/Zip" type="string" column "SIC Description" type="string" column "Salesperson" type="string" column "Sales Region" type="string" column "Product Family" type="string" column "Product Name" type="string" column "Units" type="integer" column "Cost" type="fixed100" column "Revenue" type="fixed100" format="$#,#.00" column "YearMo445" type="string"required-dimension="Date" column "YearMo454" type="string"required-dimension="Date" column "YearMo544" type="string"required-dimension="Date" column "SIC Code" type="string"required-dimension="SIC Description" column "Address1" type="string"required-dimension="Customer" } replace "Date" `date_from_date_number(value("Date"))` output "demo_drl.cbase" }
You can list multiple file names with the text-input tag. For example:
build { text-input "/di_data/2014*.dat" "/di_data/2015*.dat" "/di_data/2016*.dat"{ limit-similar-values-warnings 0 column "Account ID"type="string" column "Charge Code"type="string" column "Discharge Date"type="date"format="YYYY-MM-DD"input-format="YYYY-M-D" column "Encounter ID"type="string" column "Facility Code"type="string" column "Service Date"type="date"format="YYYY-MM-DD"input-format="YYYY-M-D" column "Service Patient Type"type="string" column "Source System"type="string"suggested-dimension=false column "Total Charges"type="double" column "Total Costs"type="double" add "Service Year" `substr(string(value("Service Date")),1,4)` add "Service Year-Mo" ``` period("gregorian month", substr(string(value("Service Date")),1,7))```format="yyyy-mm" add "Service Fiscal Year-Mo" ``` offset(value("Service Year-Mo"),"month", parse_integer(param("Fiscal_Month_Offset")))```format="yyyy-mm" add "Service Fiscal Year" `string(year(value("Service Fiscal Year-Mo")))` add "Service Month Name" `month_name(value("Service Date"))`suggested-dimension=false add "Service Day of Week" `day_name(value("Service Date"))`suggested-dimension=false add "Service Day of Month" ``` lpad(string(day(value("Service Date"))),2,"0")```suggested-dimension=false add "Service Week_Sort" ``` string(if(mod(date_number(value("Service Date")),7)<=1, mod(date_number(value("Service Date")),7) + 6,mod(date_number(value("Service Date")),7) - 1))```suggested-dimension=false include-other-columns } output "/staging/three_year.cbase" }
If the three years change over time, you can avoid hard-coded values and instead use parameters for year_0, year_1, and year_2. For example:
take-parameter "year_0" take-parameter "year_1" take-parameter "year_2" text-input "/di_data/$(year_0)*.dat" "/di_data/$(year_1)*.dat" "/di_data/$(year_2)*.dat"
You can use a cBase as input to a Build script by using the cbase-input tag. This example also includes a cBase for a lookup operation.
build { take-parameter "build_start_month"default="201401" cbase-input "/staging/adj_010_keys.cbase" { filter `value("Posting Date") >= date(concat(param("build_start_month"),"01"),"YYYYMMDD")` remove "Discharge Date" } lookup { cbase-input "/staging/acct_020_flags_subset.cbase" { rename "Facility" "Account Facility" rename "Inpatient Facility" "Account Inpatient Facility" } key "Account ID" "Account ID" } output "/staging/adjustments.cbase" }
This example uses filters on both inputs to ensure that no data is duplicated. The cbase-input is filtered on Ship Date, as is the transaction text-input. Each input includes an add operation to insert a value for the Source column. The output cBase then clearly indicates the source of each record.
build { cbase-input "/cbases/commodities.cbase" { filter `value("Ship Date") <= date("2016/07/31")` add "Source" `"cbase"` } text-input "/data/commodities.txt" { column "Ship Date"type="date"format="YYYY/MM/DD"input-format="YYYY/M/D" column "Customer"type="string" column "Commodity"type="string" column "Supplier"type="string" column "Salesperson"type="string" column "Qty"type="integer" column "Sales"type="integer" column "GP"type="integer" column "Claims"type="fixed100" column "Commission"type="integer" filter `value("Ship Date") > date("2016/07/31")` add "Source" `"text"` } output "/cbases/commodities_append.cbase" }
Spectre builds are fast, so incremental builds are rarely needed. However, in certain cases, such as high data volumes or heavy load on your database, you can use incremental builds for additional speed.
You could start by building the initial database. For example:
build { text-input "2017.txt" { column "Order Num"type="string" column "Date"type="date"format="YYYY-MM-DD"input-format="YYYY-M-D" column "Customer"type="string" column "Customer Name"type="string" column "City/State/Zip"type="string" column "SIC Description"type="string" column "Salesperson"type="string" column "Sales Region"type="string" column "Product Family"type="string" column "Product Name"type="string" column "Units"type="integer" column "Cost"type="fixed100" column "Revenue"type="fixed100" column "SIC Code"type="string" column "Address1"type="string" } output "data.cbase" }
The resulting cBase has only the year 2017 in it. If you want to add 2018 to the existing data, you can do this by taking the main cBase as input, taking the 2018 data as an additional input, and then writing the result back to the main cBase. This is usually much faster than re-reading the original 2017 text file. For example:
build { cbase-input "data.cbase" { filter `year(value("Date")) < 2018` } text-input "2018.txt" { column "Order Num"type="string" column "Date"type="date"format="YYYY-MM-DD"input-format="YYYY-M-D" column "Customer"type="string" column "Customer Name"type="string" column "City/State/Zip"type="string" column "SIC Description"type="string" column "Salesperson"type="string" column "Sales Region"type="string" column "Product Family"type="string" column "Product Name"type="string" column "Units"type="integer" column "Cost"type="fixed100" column "Revenue"type="fixed100" column "SIC Code"type="string" column "Address1"type="string" } output "data.cbase" }
Note that you need to filter out 2018 from the main cBase so that if you run this build more than once you will not get duplicate rows.
In this scenario, a cBase needs to be kept current by posting incremental extracts from the source. The following script reads a set of text files containing the records for the last 7 days from an extract, and adds the records to an existing cBase of previously extracted data. The field Patient URN is used as the key to identify records already in the cBase. It does this by using a lookup on the set of transaction files to identify new accounts, and then filters out any records in the cBase for accounts with new transactions (to avoid double counting). This algorithm, in effect, drops the duplicate records from the cBase, and re-adds them from the incremental extract, calculating account balances and other charge columns from the latest extracted data.
build { cbase-input "/data/patient-data.cbase" { add "Source" `"cbase"` lookup { text-input "/data/patient-data-*.dat" { column "Patient URN"type="string" sort { column "Patient URN" } add "dupe" `if(on_previous_row(value("Patient URN"))!=value("Patient URN"),"N","Y")` filter `value("dupe")="N"` remove "dupe" add "New File Flag" `"Y"` } key "Patient URN" "Patient URN" } filter ```(value("New File Flag")!="Y"and value("Source")="cbase")or value("Source")="text"``` remove "New File Flag" } text-input "/data/patient-data-*.dat" { column "Activity Date"type="date"format="YYYY-MM-DD"input-format="YYYYMMDD" column "Patient URN"type="string" column "Ancillary Charges Amount"type="string" column "Routine Charges Amount"type="string" column "Account Balance"type="string" column "AR Charges Amount"type="fixed100" add "Account Balance String" `value("Account Balance")` replace "Account Balance" ``` if(is_unknown(parse_fixed100(value("Account Balance"))), null,parse_fixed100(value("Account Balance"))) ``` replace "Ancillary Charges Amount" ``` if(is_unknown(value("Ancillary Charges Amount")), null,value("Ancillary Charges Amount")) ``` replace "Routine Charges Amount" ``` if(is_unknown(value("Routine Charges Amount")), null,value("Routine Charges Amount")) ``` replace "AR Charges Amount" ``` if(is_unknown(value("AR Charges Amount")), null,value("AR Charges Amount")) ``` add "Source" `"text"` } lookup { text-input "/temp/base/data_trigger.txt" { column "Yesterday"type="date"format="YYYY-MM-DD"input-format="YYYY-M-D" } } replace "_trigger_yesterday" ``` if(is_null(value("_trigger_yesterday")), value("Yesterday"),value("_trigger_yesterday")) ``` remove "Source" remove "Yesterday" output "/data/patient-data.cbase" }
The following simple Build script uses ODBC input. The script connects to a DSN called hospital_demo and runs a query that selects all columns from the charge_master table. This build puts the results of the query into a cBase; you do not need to name the fields. The script brings all of the fields in the table into the cBase and does its best to match the type of each field. For more information about the odbc-input tags, see ODBC Input Build Tags.
build { odbc-input { dsn "hospital_demo" query "select *from charge_master" } output "/cbases/test_charge_master.cbase" }
NOTE: If a query such as this encounters a text or variable length field with an unspecified maximum length, the field is capped at 32,768 characters.
The following odbc-input block has a query that causes all table columns to be pulled into the Spectre Build, which determines the data type. Three columns have overrides for their types using column-override. However, a date column defined in the SQL Server database table has the SQL datatype nvarchar(25) and needs special handling. That is, to take an nvarchar(25) field from the database and move it to a date in the cBase, an explicit conversion must be performed. How to convert fields is database-specific. If you choose not to do it in the SQL query itself, you can do the conversion in the Build script after reading the data from the database. The example script shows how to use replace to convert the Effective Date nvarchar(25) column to a date type column for the cBase after it has been read.
build { odbc-input { dsn "ProdTbls" query "select *from [CMast]" column-override "ID"type="string" column-override "Cost per Unit"type="double" column-override "Exclude Flag"type="integer"suggested-dimension=true } replace "Effective date" `date(value("Effective Date"), "YYYY-MM-DD")` output "cmast-output.cbase" }
The following Build script uses an odbc-input tag. There are triple quotes around the query to allow the query string to span multiple lines in the script. Here several columns are selected from two tables joined by customer ID; two column types are altered from the database schema, several columns are renamed, and one new column is created.
build { odbc-input encoding="UTF8" { dsn "Northwind" query """ SELECT Orders.OrderID, Orders.OrderDate, Orders.RequiredDate, Orders.ShippedDate, Orders.Freight, Orders.ShipName, Orders.ShipAddress, Orders.ShipCity, Orders.ShipRegion, Orders.ShipPostalCode, Orders.ShipCountry, Customers.CompanyName as CustomerName, Customers.ContactName as CustomerContact, Customers.ContactTitle as CustomerContactTitle, Customers.Address as CustomerAddress, Customers.City as CustomerCity, Customers.Region as CustomerRegion, Customers.PostalCode as CustomerPostalCode, Customers.Country as CustomerCountry, Customers.Phone as CustomerPhone, Customers.Fax as CustomerFax FROM Orders, Customers WHERE Orders.CustomerId = Customers.CustomerId """ column-override "OrderID" type="string" column-override "OrderUnitPrice" type="fixed100" } add "Year" `year(value("OrderDate"))` output "/data/northwind.cbase" }
This ODBC example is similar to the preceding one, but this example has more tables and columns.
build { odbc-input { dsn "northwind" query """ SELECT Orders.OrderID, Orders.OrderDate, Orders.RequiredDate, Orders.ShippedDate, Orders.Freight, Orders.ShipName, Orders.ShipAddress, Orders.ShipCity, Orders.ShipRegion, Orders.ShipPostalCode, Orders.ShipCountry, [Order Details].Quantity, [Order Details].Discount, [Order Details].UnitPrice as OrderUnitPrice, Products.ProductName, Products.QuantityPerUnit, Products.UnitPrice as ProductUnitPrice, Products.UnitsInStock, Products.UnitsOnOrder, Products.Discontinued, Products.ReorderLevel, Categories.CategoryName, Categories.Description as CategoryDescription, Suppliers.CompanyName as SupplierName, Suppliers.ContactName as SupplierContact, Suppliers.ContactTitle as SupplierContactTitle, Suppliers.Address as SupplierAddress, Suppliers.City as SupplierCity, Suppliers.Country as SupplierCountry, Suppliers.Fax as SupplierFax, Suppliers.HomePage as SupplierHomePage, Suppliers.Phone as SupplierPhone, Customers.CompanyName as CustomerName, Customers.ContactName as CustomerContact, Customers.ContactTitle as CustomerContactTitle, Customers.Address as CustomerAddress, Customers.City as CustomerCity, Customers.Region as CustomerRegion, Customers.PostalCode as CustomerPostalCode, Customers.Country as CustomerCountry, Customers.Phone as CustomerPhone, Customers.Fax as CustomerFax, Shippers.CompanyName as ShipperName, Shippers.Phone as ShipperPhone FROM Orders, [Order Details], Products, Categories, Suppliers, Customers, Shippers WHERE Orders.OrderID = [Order Details].OrderID AND [Order Details].ProductID = Products.ProductID AND Products.CategoryID = Categories.CategoryID AND Products.SupplierID = Suppliers.SupplierID AND Orders.CustomerId = Customers.CustomerId AND Orders.ShipVia = Shippers.ShipperID """ column-override "OrderUnitPrice" type="fixed100" column-override "ProductUnitPrice" type="fixed100" } output "northwind.cbase" }
The following script produces a cBase with a row for each unique account_id for the year 2012.
build { odbc-input { dsn "hospital_demo"query """ select account_id,count(*) from charge_detail where posting_date>='2012-01-01' and posting_date<'2013-01-01' group by account_id """ } output "/cbases/extract_charge_detail.cbase" }
This build uses a parameter in the SQL query and in the filter.
build { take-parameter "ClientID" default="X999999"odbc-input { dsn "SQLProd" query """ SELECT [Master ID], [Payer Category], [Payer Class] FROM PayMaster where [Master ID] = ? """ { bind "$(ClientID)" } } filter `value("Master ID")=param("ClientID")` add "FinClassKey" `upper(value("Payer Category"))` output "/ClientDives/$(ClientID)/temp/PayerMaster.cbase" }
You can use a Dive as input to a Build script by using the dive-input tag.
build { dive-input "sales_region.dive" { add "Test Column" `"District String"+value("Sales Region")` } output "sales_region.cbase" }
A dive-input can be specified inline in a Build file to replicate Integ-like squashing. For example:
build {
    dive-input {
        dive {
            cplan {
                text-input "detail.txt"
            }
            window {
                dimension "Item Number"
                column "Weighted Average Cost"
                column "FOB Cost"
            }
        }
    }
    // Can do other operations here, like
    // adding new calculated columns,
    // running lookups, filtering, & sorting
    output "summarized.cbase"
}
A lookup inside a Build script can make use of dive-input. For example:
build { cbase-input "Sales.cbase" lookup { dive-input { dive { cplan "/cplans/transactions.cplan" window { dimension "Dept" dimension "Item Type" column "Total Charges" column "Total Payments" } } } } output "/cbases/SalesExpanded.cbase" }
You can pass a parameter from your Build script into the Dive script. For example:
build { take-parameter "file"default="insurances.txt" dive-input "insurances.dive" { pass-parameter "file" } output "insurances.cbase" }
Where the Dive script is similar to this:
dive { take-parameter "file"default="WRONG.TXT" cplan { text-input "$(file)" { column "accid"type="string" column "seqno"type="string" column "insurance"type="string" } } window { dimension "accid" column "maxseq" `max(value("seqno"))` } }
NOTE: You must declare the parameters in the Dive.
Here are useful expressions for doing manual date roll-ups in a build.
add "WeekYear" `period("iso8601 week", value("Order Date"))` add "MonthYear" `period("gregorian month", value("Date"))`format="YYYY-MM" add "QuarterYear" `period("gregorian quarter", value("Date"))` add "Year" `string(year(value("Ship Date")))`
NOTE: These roll-ups can also be done with expression dimensions in a cPlan (which avoids using space in the cBase) or with ProDiver's standard Date Roll-up feature.
When using text files that contain dates stored as date numbers, the best way to produce dates is to read them as integers and then use the date_from_date_number function. For example:
build { text-input "demo_drl.txt" { column "Posting Date"type="integer" . . . } replace "Posting Date" `date_from_date_number(value("Posting Date"))` }
On the other hand, if you are dealing with a text file that has a date string in it, you proceed differently. For example, suppose the text file has just two rows: row one has the column header "Processing Date", and row two has the string value 2018-02-15. You want to pass that value to a Build script. You can accomplish this by declaring the parameter in the Build and then invoking the build with the value. In Production, you use a Parameter node with the Parameter file attribute set to the text file (for example, processing_date.txt) and a cBase node pointing to the parameterized Build script (as shown in the following script). By setting the Parameter name to "Processing Date" and the Value to $(Processing Date) on the cBase node, the value used by the Build script changes each time the two-line text file is updated with a new date.
build { text-input "/temp/processing_date.txt" take-parameter "Processing Date"default="now" extract-time "$(Processing Date)" text-input "/data/orders.txt" { column "Order Num"type="string" column "Customer" column "Salesperson" column "Revenue" } output "/cbases/orders.cbase" }
If the input file has separate string columns for the year and month, they can be used together to create a period type column. For example:
build { text-input "/y-m-d-cols.txt" { column "Year"type="string" column "Month"type="string" column "Day"type="string" } add "YearMoPeriod" ``` period("gregorian month", concat(value("Year"),"/", value("Month")), "YYYY/MM") ``` output "/y-m-d-cols.cbase" }
If the input file has a string column with both the date and time (for example: YYYY-MM-DD HH:MM:SS), you can use the date portion and parse it as a date to create a period column. For example:
build { text-input "input.txt" { column "AccountingDate"type="string" // plus the rest } replace "AccountingDate" `date(substr(value("Accounting Date"), 1, 10))` add "AccMonth" `period("gregorian month", value("AccountingDate"))` output "output.cbase" }
When possible, use input-format to interpret the text date fields and build the period column in the cBase. For example:
column "YearMth" type="period" input-format="YYYY/MM" calendar="gregorian month" column "Quarter" type="period" input-format="YYYY/Q" format="YYYY/Q" calendar="gregorian quarter" column "Ship Date" type="date" input-format="YYYY/MM/DD" format="MM/DD/YYYY"
To filter out all but the first occurrence of each value in a particular column, the data needs to be sorted so that you can use the on_previous_row() function. For example:
build { text-input "data.txt" { column "Sales Region"type="string" column "Plan Units"type="integer" ... } // Sort by Sales Region,and within each Sales Region by Plan Units // so the record to keep appears first sort { column "Sales Region" column "Plan Units"reverse=true } // Keep a record only if Sales Region differs from what is on the previous row filter `value("Sales Region") != on_previous_row(value("Sales Region"))` output "my.cbase" }
To filter out duplicate rows based on the Customer column for a series of inputs, be sure to place the sort under the build tag so that it sorts the union of the files, rather than each individual file. For example:
build { cbase-input "demo_drs*.cbase" sort { column "Customer" } filter `value("Customer") != on_previous_row(value("Customer"))` output "CustomerSummary.cbase" }
This Build sorts ascending on the ID column and descending on the date column. Then, the build uses a filter to detect when the ID changes from one row to the next. The result is a cBase with the most recent record for each ID.
build { text-input "data.txt" { column "ID"type="string" column "Status"type="string" column "Status date"type="date" } sort { column "ID" column "Status date"reverse=true } filter `on_previous_row(value("ID")) != value("ID")` output "data.cbase"
}
When you need to apply different filters to multiple inputs, be sure to put the filter in the input block. For example:
build { cbase-input "sales.cbase"{ filter `value("Year")="2017"or value("Year")="2016"or value("Year")="2014"` } cbase-input "sales_aop.cbase" { filter `value("Year")="2015"` } output "Sales_and_AOP_merged.cbase" }
When you build a cBase, you can manipulate the data types on columns in the build script. In the following snippet, the service date starts out as a date, is changed to a string, and is used as a date to determine the service year. Note that each Spectre function returns a specific data type.
build { locale "en_US" text-input "/data/transactions.txt" { column "Service Date"type="date"format="YYYY-MM-DD"input-format="YYYY/MM/DD" replace "Service Date" `replace(string(value("Service Date")),"/","-")` add "Service Year" `year(date(value("Service Date"),"YYYY-MM-DD"))` } output "/cbases/transactions.cbase"
}
When you build, you might want to refine a tab-delimited column into more columns, such as city, state, and ZIP Code. You can do this by using various functions in Spectre expressions. For example:
build { text-input "dunkin.txt" { column "Location" column "Address" column "icon" rename "Location" "Long-Lat" // Assuming the Address is "Something Something...,CITY,XX, 01234" // you can capture &chop off zipcode,then find comma after the city name, // and parse out the state value,etc add "Zipcode" `substr(value("Address"), length(value("Address"))- 4)` replace "Address" `substr(value("Address"), 1, length(value("Address"))- 6)` add "State" `scan(value("Address"),-1, ", ")` add "City" `ltrim(scan(value("Address"),-2, ",")) + ", " + value("State")` replace "Address" `substr(value("Address"), 1, rindex(value("Address"), ",")- 1)` replace "Address" `substr(value("Address"), 1, rindex(value("Address"), ",")- 1)` add "Brand" `"Dunkin' Donuts"` } output "test.cbase" }
When you build cBases, you can include data such as extract time, comments, and footer text. For example:
build { take-parameter "Extract Time"default="now" text-input "basic.txt" // $COMMENTS description "Basic cBase showing tags used by $-variables in Reports" // $EXTRACT_TIME / $EXTRACT_DATE extract-time "$(Extract Time)" // Examples showing allowed values:Timestamp,date only,or "now" // extract-time "2010-05-23 11:34:12" // extract-time "2010-05-25" // extract-time "now" // custom-properties are accessed via $MODEL_VAR() // $MODEL_VAR(color) custom-property "color" "blue" // "Footer"is accessible two ways, $MODEL_FOOTER or $MODEL_VAR(Footer) custom-property "footer" "Confidential" output "basic.cbase" }
The Info Field concept from Diver Solution can be simulated in Spectre Builds by using the required-dimension tag. The column flagged as such only displays in the window if the associated dimension is present. In this example, Address1 displays when Customer is in the dive window. You can promote Address1 to the console for diving in ProDiver or DiveMaster (as a Dynamic Dimension). State also displays when Customer is in the dive window. However, use of the suggested-dimension tag with a true value adds State to the console automatically.
You can flag columns to not display as Dimensions by default by using the suggested-dimension tag with a value of false.
build { text-input "demo_drl.txt" { column "Order Num" type="string" column "Date" type="integer" column "YearMo" type="period"calendar="gregorian month"format="YYYY/MM" column "Quarter" type="period"calendar="gregorian quarter"format="YYYY/Q" column "Salesperson" type="string" column "SIC Desc" type="string"suggested-dimension=false column "Customer" type="string" column "Address1" type="string"required-dimension="Customer" column "State" type="string"required-dimension="Customer"suggested-dimension=true column "Product Name"type="string" column "Units" type="integer" column "Cost" type="fixed100" column "Revenue" type="fixed100"format="$#,#.00" } replace "Date" `date_from_date_number(value("Date"))` output "demo_drl.cbase" }
You can create a month field from a date in a cBase build. The field is sorted by default as a string: April, August, and so on. In order to sort the data by calendar month (January, February, March, and so on), you have some options when creating the alternate sort.
For example, suppose you had a months.txt file with two columns: MonthName and MonthSort.
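For illustration, assuming months.txt is a tab-delimited text file with a header row, its contents might look like this (the remaining months follow the same pattern):

    MonthName    MonthSort
    January      1
    February     2
    March        3
    April        4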
You could create a Build:
build { text-input "months.txt" { column "MonthName"type="string"sort-by="MonthSort" column "MonthSort"type="integer" } remove "MonthSort" output "months.cbase" }
A Dive on this cBase would return the following:
MonthName
---------
January
February
March
April
...
You can define a Build file that sorts month values based on a lookup using the text file described above. For example:
build { text-input "main.txt" { column "SomeData"type="string" column "Month"type="string" } lookup { text-input "months.txt" column "MonthSort" key "Month" "MonthName" } replace "Month" `value("Month")`sort-by="MonthSort" remove "MonthSort" output "main.cbase" }
Instead of using an external file for the month number, you can determine it with a calculation. Expanding on the previous example, you can pull the month name and the month number out of the date column and sort the month names by the month numbers:
build { text-input "data.txt" { column "Date"type="date"format="MM/DD/YYYY" ... } add "MonthSort" `extract(value("Date"), "month")` add "MonthName" `month_name(value("Date"))`sort-by="MonthSort" remove "MonthSort" output "months.cbase" }
This can also be done at run-time, as happens when you perform a Date Roll-up and select "Month Only" in ProDiver. This uses an expression dimension in a cPlan. Such expression-based dimensions can also be sorted by an expression. For example:
cplan { cbase-input "/cbases/demo_drs.cbase" ``` dimension "Month" `month_name(value("Order Date"))` sort-by=`extract(value("Order Date"), "month")` ``` }
TIP: The build-time solution avoids any potential overhead at run-time.
See also:
- Build Code Lookup Samples
- General Build Tags
- Text, cBase, and Dive Input Build Tags
- ODBC Input Build Tags